I’ve been working recently with some data that doesn’t naturally fit into my AWS HealthLake datastore. I have some additional information captured in a DynamoDB table that would be useful to blend with HealthLake but on its own is not a FHIR resource. I pondered this for a while and came up with the idea of piping DynamoDB stream changes to S3 so that I could then pick them up with AWS Glue. In this article, I want to show you an approach to building a partitioned S3 bucket from DynamoDB. Refining that further with Glue jobs, tables, and crawlers will come later.
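To make the idea concrete, here’s a minimal sketch of what such a stream handler could look like, assuming a Lambda trigger on the table’s stream, a BUCKET_NAME environment variable, and a date-based year/month/day partition scheme. These are my own illustrative choices, not the article’s final code:

```typescript
// Sketch: fan DynamoDB stream records out to date-partitioned S3 keys.
// BUCKET_NAME and the partition layout are assumptions for illustration.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import type { DynamoDBStreamEvent } from "aws-lambda";

const s3 = new S3Client({});

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    const image = record.dynamodb?.NewImage;
    if (!image) continue; // skip REMOVE events with no new image

    // Partition by arrival date so Glue can prune on year/month/day.
    const now = new Date();
    const id = record.eventID ?? now.getTime().toString();
    const key = [
      `year=${now.getUTCFullYear()}`,
      `month=${String(now.getUTCMonth() + 1).padStart(2, "0")}`,
      `day=${String(now.getUTCDate()).padStart(2, "0")}`,
      `${id}.json`,
    ].join("/");

    // Stores the raw DynamoDB-JSON image; a Glue job can flatten it later.
    await s3.send(
      new PutObjectCommand({
        Bucket: process.env.BUCKET_NAME,
        Key: key,
        Body: JSON.stringify(image),
        ContentType: "application/json",
      })
    );
  }
};
```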
DynamoDB Incremental Export with Step Functions
When working on building solutions, the answer to many problems is often “it depends.” For instance, if I need to deal with data as it changes and I’m using DynamoDB, streams are the perfect feature to take advantage of. However, some data doesn’t need to be dealt with in real time; once a day or every 30 minutes might be good enough. This was problematic up until recently, when AWS released incremental exports for DynamoDB. In this article, I want to explore building an incremental export with DynamoDB and Step Functions.
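For reference, the underlying API call that a Step Functions task (or a one-off script) would make looks roughly like this with the AWS SDK for JavaScript v3; the table ARN, bucket name, and 30-minute window are placeholders:

```typescript
// Sketch: request an incremental export for the last 30 minutes of changes.
import {
  DynamoDBClient,
  ExportTableToPointInTimeCommand,
} from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

async function main(): Promise<void> {
  // Export only the changes from the last 30 minutes.
  const exportToTime = new Date();
  const exportFromTime = new Date(exportToTime.getTime() - 30 * 60 * 1000);

  await client.send(
    new ExportTableToPointInTimeCommand({
      TableArn: "arn:aws:dynamodb:us-east-1:123456789012:table/my-table", // placeholder
      S3Bucket: "my-export-bucket", // placeholder
      ExportType: "INCREMENTAL_EXPORT",
      IncrementalExportSpecification: {
        ExportFromTime: exportFromTime,
        ExportToTime: exportToTime,
        ExportViewType: "NEW_AND_OLD_IMAGES",
      },
    })
  );
}

main().catch(console.error);
```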
Analyzing and Correcting Errors with Advanced SQS Redrive
A good friend of mine is working on a really neat redrive tool for SQS and wanted to write an article describing its purpose and use. I’m super honored that he asked me to share his writing on my blog. Please find below Adam Tran’s “Analyzing and Correcting Errors with Advanced SQS Redrive.”
Analyzing dead-letter queues (DLQs) within the AWS ecosystem can be tricky. Receiving and analyzing messages via the AWS Console is very limited, and does not allow for the manipulation of messages in any sensible manner. Sure, you can redrive an entire DLQ, but what if you need to analyze thousands of messages or make changes?
There are many potential solutions to this problem, but a simple one I’ve developed is to download your queues’ messages locally, where they can be analyzed with any tool of your choosing. I’ve defined a stateful directory structure that reflects where a message is in its journey of analysis, so you can make changes in whatever manner you deem appropriate.
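As a rough sketch of the download step, something like the following could pull a batch of DLQ messages into a local directory, one file per message. The queue URL and the “unprocessed” directory name are my own placeholders, not Adam’s actual layout:

```typescript
// Sketch: download a batch of DLQ messages to local files for analysis.
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const sqs = new SQSClient({});
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq"; // placeholder
const outDir = join("messages", "unprocessed"); // placeholder directory state

async function main(): Promise<void> {
  mkdirSync(outDir, { recursive: true });

  const { Messages = [] } = await sqs.send(
    new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      MaxNumberOfMessages: 10, // SQS caps a single receive at 10 messages
      WaitTimeSeconds: 5,
    })
  );

  // One file per message so each can be inspected or edited individually.
  for (const message of Messages) {
    writeFileSync(join(outDir, `${message.MessageId}.json`), message.Body ?? "");
  }
}

main().catch(console.error);
```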
Monitoring SQS with Datadog
Event-driven architecture paired with serverless technologies is a powerful combo for building applications. But failure does happen, and you should expect it to. Dealing with that failure is often done by dead-lettering messages into a dead-letter queue. But what do you do to monitor those queues? Most people start by checking them manually, or perhaps by adding a CloudWatch alarm that triggers an SNS topic. What I’d like to show you is a more advanced version of this monitoring, built with some code, constructs, and the AWS CodeSuite of tools. Say hello to monitoring SQS with Datadog.
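For context, the baseline approach the article moves beyond looks something like this in CDK; the queue, topic, and construct names are placeholders:

```typescript
// Sketch: the "basic" DLQ monitoring pattern — a CloudWatch alarm on queue
// depth that notifies an SNS topic. All names are illustrative.
import { Stack, Duration } from "aws-cdk-lib";
import { Queue } from "aws-cdk-lib/aws-sqs";
import { Topic } from "aws-cdk-lib/aws-sns";
import { Alarm, ComparisonOperator } from "aws-cdk-lib/aws-cloudwatch";
import { SnsAction } from "aws-cdk-lib/aws-cloudwatch-actions";
import { Construct } from "constructs";

export class DlqMonitoringStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    const dlq = new Queue(this, "Dlq");
    const alertTopic = new Topic(this, "DlqAlerts");

    // Alarm as soon as any message lands in the DLQ.
    const alarm = new Alarm(this, "DlqDepthAlarm", {
      metric: dlq.metricApproximateNumberOfMessagesVisible({
        period: Duration.minutes(1),
      }),
      threshold: 0,
      evaluationPeriods: 1,
      comparisonOperator: ComparisonOperator.GREATER_THAN_THRESHOLD,
    });

    alarm.addAlarmAction(new SnsAction(alertTopic));
  }
}
```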
Lambda Extension with Golang
For full disclosure, I’ve been writing Lambda function code since 2017, and I completely breezed over the release of Lambda Extensions back in 2020. Here’s the release announcement. At their core, extensions come in internal and external flavors. For the balance of this article, I’m going to focus on building a Lambda extension with Golang and lean into the external-style approach.
Extensions and Why
Taking a quick step back: why extensions? From an architect’s level of thinking, extensions give me the ability to have cross-team reuse of code without being tied to a particular language or build process. For something like Node or Python, you could use a standard Layer to package your Lambda reuse. But for something like Golang, where your code is packaged at build time and not at run time, you pretty much have to look at a shared library. I wrote about that here. But what if you wanted to create some shared functionality that was usable regardless of which language you built your Lambda in? That has serious appeal for my current projects, where teams are using different stacks to build their APIs due to need and comfort.
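Since the Extensions API is just HTTP, the register-and-poll loop can be sketched in any language, which is exactly why extensions cross language boundaries. Here’s an illustrative TypeScript version of the cycle the Golang extension implements; the extension name is a placeholder, and in a real layer it must match the executable’s file name under the extensions/ directory:

```typescript
// Sketch: the external extension lifecycle against the Lambda Extensions API.
// "my-extension" is a placeholder name. Requires Node 18+ for global fetch.
const runtimeApi = process.env.AWS_LAMBDA_RUNTIME_API;
const baseUrl = `http://${runtimeApi}/2020-01-01/extension`;

async function main(): Promise<void> {
  // Register for INVOKE and SHUTDOWN events.
  const registerResponse = await fetch(`${baseUrl}/register`, {
    method: "POST",
    headers: { "Lambda-Extension-Name": "my-extension" },
    body: JSON.stringify({ events: ["INVOKE", "SHUTDOWN"] }),
  });
  const extensionId = registerResponse.headers.get(
    "lambda-extension-identifier"
  )!;

  // Long-poll for events; the sandbox is frozen between invocations.
  while (true) {
    const eventResponse = await fetch(`${baseUrl}/event/next`, {
      headers: { "Lambda-Extension-Identifier": extensionId },
    });
    const event = await eventResponse.json();

    if (event.eventType === "SHUTDOWN") break;
    // On INVOKE: do your cross-cutting work here (flush logs, emit metrics, etc.).
  }
}

main().catch(console.error);
```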