Categories: Serverless, Data

DynamoDB Incremental Export with Step Functions

When building solutions, the answer to many problems is often "it depends." For instance, if I need to deal with data as it changes and I'm using DynamoDB, streams are the perfect feature to take advantage of. However, some data doesn't need to be dealt with in real time; once a day or every 30 minutes might be good enough. That was problematic up until recently, when AWS released incremental exports for DynamoDB. In this article, I want to explore building an incremental export with DynamoDB and Step Functions.
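To make that concrete, here's a minimal sketch of the export call itself using the AWS SDK for JavaScript v3. The table ARN and bucket are placeholders, the table needs point-in-time recovery enabled, and the Step Functions wiring around it is what the article builds out.

```typescript
// Minimal sketch: kick off an incremental export with the AWS SDK v3.
// TABLE_ARN and EXPORT_BUCKET are placeholders; a scheduled Step Functions
// task (or Lambda) could call this on whatever cadence fits the data.
import {
  DynamoDBClient,
  ExportTableToPointInTimeCommand,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

export const startIncrementalExport = async (from: Date, to: Date) => {
  const output = await client.send(
    new ExportTableToPointInTimeCommand({
      TableArn: process.env.TABLE_ARN, // table must have PITR enabled
      S3Bucket: process.env.EXPORT_BUCKET,
      ExportFormat: "DYNAMODB_JSON",
      ExportType: "INCREMENTAL_EXPORT",
      IncrementalExportSpecification: {
        ExportFromTime: from,
        ExportToTime: to,
        ExportViewType: "NEW_AND_OLD_IMAGES",
      },
    })
  );
  // The export runs asynchronously; poll DescribeExport (or have the
  // state machine wait) until it completes.
  return output.ExportDescription?.ExportArn;
};
```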

Categories: Leadership

Building Trust

I’ve been spending a lot of time lately being self-reflective, and this week I was thinking about building trust. The more I thought about trust, the more I wanted to write something about how to build it with your team, your lead, or even your boss. I’ve got a unique perspective, having been a team member, a team lead, and lately the head of a department. Trust is something that takes time to build and an instant to destroy. But once you gain it, pairing it with influence and credibility is a powerful combination for you as a technologist. Let’s dig into building trust in a technology team.

Categories: Serverless, Programming

WebSocket with AWS API Gateway

I was working recently with some backend code, and I needed to communicate the success or failure of the result back to my UI. I instantly knew that I needed a WebSocket to handle this interaction between the backend and the frontend. With all the Serverless and non-Serverless options out there, though, which way do I go? How about plain old WebSockets with AWS API Gateway and Serverless?
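As a taste of where this lands, here's a minimal sketch of the server-side push, assuming a deployed WebSocket API and a connectionId captured on the $connect route (both placeholders here; connection ids are typically stashed in DynamoDB).

```typescript
// Minimal sketch: push a result to a connected client over an API Gateway
// WebSocket. WEBSOCKET_ENDPOINT is an assumption -- it comes from the
// deployed stage, e.g. https://{apiId}.execute-api.{region}.amazonaws.com/{stage}
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from "@aws-sdk/client-apigatewaymanagementapi";

const client = new ApiGatewayManagementApiClient({
  endpoint: process.env.WEBSOCKET_ENDPOINT,
});

export const notifyClient = async (connectionId: string, success: boolean) => {
  // The client receives this payload on its open socket.
  await client.send(
    new PostToConnectionCommand({
      ConnectionId: connectionId,
      Data: Buffer.from(JSON.stringify({ status: success ? "SUCCEEDED" : "FAILED" })),
    })
  );
};
```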

Categories: Observability

Analyzing and Correcting Errors with Advanced SQS Redrive

A good friend of mine is working on a really neat redrive tool for SQS and wanted to write an article describing its purpose and use. I’m super honored that he asked me to share his writing on my blog. Please find below Adam Tran’s “Analyzing and Correcting Errors with Advanced SQS Redrive.”


Analyzing dead-letter queues (DLQs) within the AWS ecosystem can be tricky. Receiving and analyzing messages via the AWS Console is very limited and does not allow you to manipulate messages in any sensible manner. Sure, you can redrive an entire DLQ, but what if you need to analyze thousands of messages or make changes?

There are many potential solutions to this problem, but a simple one that I’ve developed is to download your queues’ messages locally, where they can be analyzed with any tool of your choosing. I’ve defined a stateful directory structure that reflects where a message is in its journey of analysis, so that you can make changes in whatever manner you deem appropriate.
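To illustrate the core idea (a sketch of the concept, not the tool itself), downloading a queue's messages into a state directory might look something like this. The queue URL and the "new" directory name are assumptions for this example.

```typescript
// Illustrative sketch: pull messages from a DLQ and write each one to a
// local "new" directory for analysis. Messages are not deleted, so they
// stay hidden only for the visibility timeout and remain on the queue.
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const sqs = new SQSClient({});
const stateDir = join(process.cwd(), "messages", "new");

export const downloadMessages = async (queueUrl: string) => {
  mkdirSync(stateDir, { recursive: true });
  // Long-poll in batches until the queue stops returning messages.
  for (;;) {
    const { Messages } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 5,
        MessageAttributeNames: ["All"],
      })
    );
    if (!Messages?.length) break;
    for (const message of Messages) {
      // One file per message; the directory it lives in reflects its state.
      writeFileSync(
        join(stateDir, `${message.MessageId}.json`),
        JSON.stringify(message, null, 2)
      );
    }
  }
};
```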

Categories: Conference

Attending your First re:Invent

First off, this post is not affiliated with AWS or the re:Invent conference. The content below contains opinions and some tips for navigating your first re:Invent. I believe that if you are building on AWS and want to elevate your experience, network, and opportunities with the platform, re:Invent is something you need to attend and put on your calendar year over year. Let’s get started and talk about attending your first re:Invent conference.

Categories: Infrastructure, Observability

Monitoring SQS with Datadog

Event-Driven architecture paired with Serverless technologies is a powerful combo for building applications. But failure does happen, and you should expect it to. Dealing with that failure is often done by dead-lettering messages into a Dead-Letter Queue. But how do you monitor those queues? Most people start by checking them manually, or perhaps by adding a CloudWatch Alarm that triggers an SNS topic. What I’d like to show you is a more advanced version of this monitoring through some code, constructs, and the AWS CodeSuite of tools. Say hello to monitoring SQS with Datadog.
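For reference, that starting point looks something like this in CDK v2: an alarm on the DLQ's visible-message count wired to an SNS topic. The construct names are placeholders, and the article's Datadog approach builds past this baseline.

```typescript
// The baseline mentioned above -- a CloudWatch alarm on a DLQ that
// notifies an SNS topic as soon as any message lands in it.
import { Duration } from "aws-cdk-lib";
import { Alarm, ComparisonOperator } from "aws-cdk-lib/aws-cloudwatch";
import { SnsAction } from "aws-cdk-lib/aws-cloudwatch-actions";
import { Queue } from "aws-cdk-lib/aws-sqs";
import { Topic } from "aws-cdk-lib/aws-sns";
import { Construct } from "constructs";

export const addDlqAlarm = (scope: Construct, dlq: Queue, topic: Topic) => {
  const alarm = new Alarm(scope, "DlqMessagesVisible", {
    // Fire when one or more messages are visible in the DLQ.
    metric: dlq.metricApproximateNumberOfMessagesVisible({
      period: Duration.minutes(1),
    }),
    threshold: 1,
    evaluationPeriods: 1,
    comparisonOperator: ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
  });
  alarm.addAlarmAction(new SnsAction(topic));
};
```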

Categories: Programming, Serverless

DynamoDB Streams EventBridge Pipes Multiple Items

I’ve written a few articles lately on EventBridge Pipes, specifically around using them with DynamoDB Streams. I’ve written about Enrichment, and I’ve written about just straight Streaming. I believe EventBridge Pipes plays a nice part in a Serverless, Event-Driven approach. So in this article, I want to explore streaming DynamoDB to EventBridge Pipes with multiple items in one table.

Several of the comments I received about Streaming DynamoDB to EventBridge Pipes asked, “What if I have multiple item collections in the same table?” I intend to show a pattern for handling that exact problem in this article. At the bottom, you’ll find a working code sample that you can deploy and build on top of. I’ve used this exact setup in production, so rest assured that this is a solid base to start from.
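As a preview of the pattern, one way to split item collections is a source filter per entity type on the pipe. Here's a sketch with the CDK L1 construct; the PK prefix, ARNs, and names are assumptions, and the filter shape follows DynamoDB stream records (keys nested under dynamodb.Keys).

```typescript
// Sketch: one pipe per item collection, each with a source filter keyed on
// a PK prefix (e.g. "ORDER#" in a single-table design). Records that match
// flow through this pipe; other collections get their own pipe and filter.
import { CfnPipe } from "aws-cdk-lib/aws-pipes";
import { Construct } from "constructs";

export const orderPipe = (
  scope: Construct,
  roleArn: string,
  streamArn: string,
  busArn: string
) =>
  new CfnPipe(scope, "OrderPipe", {
    roleArn,
    source: streamArn,
    target: busArn,
    sourceParameters: {
      dynamoDbStreamParameters: { startingPosition: "LATEST", batchSize: 10 },
      filterCriteria: {
        filters: [
          {
            pattern: JSON.stringify({
              dynamodb: { Keys: { PK: { S: [{ prefix: "ORDER#" }] } } },
            }),
          },
        ],
      },
    },
  });
```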

Categories: Data, Serverless

DynamoDB Streams EventBridge Pipes Enrichment

I’ve been wanting to spend more time lately talking about AWS HealthLake, and more specifically Fast Healthcare Interoperability Resources (FHIR), which is the foundation for interoperability in healthcare information systems. I believe very strongly that Serverless is for more than just client- and user-driven workflows. I wrote extensively about it here, but I wanted to take a deeper dive into building out streams of dataflows. I’ve been using this pattern for quite some time in production, so let’s have a look at EventBridge Pipes enriching DynamoDB Streams.
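To sketch the shape of that enrichment step (the names here are hypothetical, not the article's code): the pipe hands a Lambda a batch of stream records, and whatever the function returns is what flows on to the target.

```typescript
// Sketch of a pipe enrichment Lambda: receive a batch of DynamoDB stream
// records, enrich each one, and return the array the pipe forwards on.
import type { DynamoDBRecord } from "aws-lambda";

// Hypothetical helper -- stand-in for whatever enrichment applies
// (a HealthLake read, another table, an API call, etc.).
declare function fetchFhirResource(id: string): Promise<Record<string, unknown>>;

export const handler = async (records: DynamoDBRecord[]) => {
  // The returned array replaces the raw records as the pipe's payload.
  return Promise.all(
    records.map(async (record) => {
      const id = record.dynamodb?.Keys?.PK?.S ?? "";
      return {
        key: id,
        eventName: record.eventName,
        resource: await fetchFhirResource(id),
      };
    })
  );
};
```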

Categories: Data

AWS HealthLake Exports

In my previous article, I wrote about a Callback Pattern with AWS Step Functions built upon the backbone of HealthLake’s export. As deep as I went with code on the Callback portion, I felt that I didn’t give the HealthLake side of the equation enough attention. So this article is that adjustment: managing exports with AWS HealthLake, with a small sketch of the export call at the end.

What is HealthLake?

AWS HealthLake is a HIPAA-eligible service that provides FHIR APIs that help healthcare and life sciences companies securely store, transform, transact, and analyze health data in minutes to give a chronological view at the patient and population-level. – AWS

In my own words, HealthLake is a FHIR-compliant database that gives developers a robust set of APIs for building patient-centered applications. You can use HealthLake to build transactional applications, analyze large volumes of data, store structured and semi-structured information, and build analytics and reports.

When building with HealthLake, I find it fits in one of two places.

  1. As the transactional center of your Healthcare application. It is highly patient-centered, very scalable, and contains APIs for working with each resource. In addition, it provides SMART on FHIR capabilities that make it a nice choice for building an application on top of.
  2. As the aggregation point for many external and internal systems in a LakeHouse-style architecture for interoperability and reporting. When you’ve got a distributed system with various databases and you need your data reunited in one location, HealthLake does that. Or if you are pulling in data from various external sources, HealthLake can do that too. I wrote about doing this with Serverless a while back.
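And tying this back to exports: kicking one off is a single API call. Here's a minimal sketch with the AWS SDK v3, where the datastore id, IAM role, bucket, and KMS key are all placeholders.

```typescript
// Minimal sketch: start a HealthLake FHIR export job. The job runs
// asynchronously; poll DescribeFHIRExportJob (or use a callback pattern,
// as in the previous article) until it completes.
import {
  HealthLakeClient,
  StartFHIRExportJobCommand,
} from "@aws-sdk/client-healthlake";

const client = new HealthLakeClient({});

export const startExport = async () => {
  const { JobId, JobStatus } = await client.send(
    new StartFHIRExportJobCommand({
      DatastoreId: process.env.DATASTORE_ID,
      DataAccessRoleArn: process.env.EXPORT_ROLE_ARN, // role HealthLake assumes to write to S3
      OutputDataConfig: {
        S3Configuration: {
          S3Uri: `s3://${process.env.EXPORT_BUCKET}/exports/`,
          KmsKeyId: process.env.EXPORT_KMS_KEY_ARN,
        },
      },
    })
  );
  return { JobId, JobStatus };
};
```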