I’ve been thinking about this topic a lot lately while bringing EventBridge’s EventBus into some applications. On the current projects I’m working on with existing code, I’ve said a hundred times that if EventBridge had existed when I started them, I wouldn’t have so much SNS->SQS-based code lying around. But such is life when working in evolving tech. Enter the EventBus Mesh.
Category: Infrastructure
Canary Deployment for AWS Lambda
In life, when working on anything, small and iterative changes give us the best opportunity for feedback and learning. And it’s through that feedback, and even failure, that we get better. The same applies to building software. Small, iterative, and independent deploys help us as builders understand whether we’ve built the right thing and architected it correctly to handle the conditions asked of it. Canary Deployment is a popular technique for doing exactly this, and the article below will demonstrate how to perform Canary Deployment for AWS Lambda.
However, when deploying more frequently, we also need to do it safely. Shipping unfinished or potentially risky changes can have a big impact on our user base. No one wants to be in the middle of using your software only to be interrupted by a bad change. While we can’t be perfect in our ability to predict the impact or blast radius of a change, we can make it so that if a deploy shows signs of trouble, we can roll that change back without the need for human intervention.
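To make that concrete, here’s a minimal sketch in CDK (TypeScript) of one way to wire that up, assuming a Node.js handler sitting in a local lambda/ folder: a Lambda alias fronted by a CodeDeploy deployment group using the built-in Canary10Percent5Minutes config, plus a CloudWatch alarm on the alias’s error metric so a bad deploy rolls back on its own. The names and thresholds here are placeholders, not the exact setup from the article.

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as codedeploy from 'aws-cdk-lib/aws-codedeploy';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

export class CanaryLambdaStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Placeholder function; point the asset at your own handler code.
    const fn = new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'),
    });

    // The alias is what CodeDeploy shifts traffic on between the old and new version.
    const alias = new lambda.Alias(this, 'LiveAlias', {
      aliasName: 'live',
      version: fn.currentVersion,
    });

    // If the new version starts throwing errors during the shift, roll back automatically.
    const errorAlarm = new cloudwatch.Alarm(this, 'CanaryErrors', {
      metric: alias.metricErrors({ period: Duration.minutes(1) }),
      threshold: 1,
      evaluationPeriods: 1,
    });

    // Send 10% of traffic to the new version, wait five minutes, then shift the rest.
    new codedeploy.LambdaDeploymentGroup(this, 'CanaryDeploy', {
      alias,
      deploymentConfig: codedeploy.LambdaDeploymentConfig.CANARY_10PERCENT_5MINUTES,
      alarms: [errorAlarm],
    });
  }
}
```

With something like this in place, each deploy of a new function version only takes 10% of the traffic at first, and CodeDeploy aborts and rolls back if the alarm fires during the bake window.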
Handling “Poison Pill” Messages with AWS Kinesis and Lambdas
Queues and streams are fundamentally different in how they handle readers consuming their information.
With an SQS queue you can have many consumers, but generally one consumer wins the read on a given message. On success the message is purged from the queue; on failure it is returned to the queue. Technically the message doesn’t get deleted right away, its visibility property is simply changed, which is why the VisibilityTimeout on the queue matters. If your code takes longer to process a message than that timeout allows, the message keeps getting put back on the queue for retry.
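To put a number on that relationship, here’s a hedged CDK (TypeScript) sketch of an SQS queue feeding a Lambda consumer, with the queue’s VisibilityTimeout set well above the function’s timeout (AWS’s general guidance for Lambda consumers is roughly six times the function timeout). Queue and function names are made up for illustration.

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export class QueueConsumerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Keep the visibility timeout comfortably above the consumer's timeout,
    // otherwise in-flight messages reappear before processing finishes.
    const queue = new sqs.Queue(this, 'WorkQueue', {
      visibilityTimeout: Duration.minutes(3),
    });

    const consumer = new lambda.Function(this, 'Consumer', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('consumer'),
      timeout: Duration.seconds(30),
    });

    // Messages that aren't deleted before the visibility timeout expires
    // become visible again and get retried by the next poll.
    consumer.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
  }
}
```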
AWS CDK Pipeline
Deploying code (assets) into AWS has never been easier than it is right now. A few months back our engineering team made the decision to go all in on AWS CDK, and with that came the need (and desire) for full pipeline automation. We’d been using a smattering of Python/Node, CloudFormation, and CodeCommit plus CodePipeline code for all of our services. Honestly, it works fine once it’s set up, but getting it set up per service became a pain, and making modifications for the idiosyncrasies of some of the services was just plain awful. So off we went, and during that exploration phase we found the opinionated little construct called AWS CDK Pipelines. Below is our walk-through of what it all meant for us.
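As a rough sketch of what that looks like, here’s the shape of a CDK Pipelines app in TypeScript. The repo name, branch, and ServiceStack contents are placeholders, but the synth step and the self-mutating pipeline are the core of the construct.

```typescript
import { Stack, StackProps, Stage, StageProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as codecommit from 'aws-cdk-lib/aws-codecommit';
import * as sns from 'aws-cdk-lib/aws-sns';
import { CodePipeline, CodePipelineSource, ShellStep } from 'aws-cdk-lib/pipelines';

// Placeholder application stack; in a real service your resources live here.
class ServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    new sns.Topic(this, 'PlaceholderTopic');
  }
}

// A Stage bundles the stacks that make up one deployable copy of the service.
class ServiceStage extends Stage {
  constructor(scope: Construct, id: string, props?: StageProps) {
    super(scope, id, props);
    new ServiceStack(this, 'Service');
  }
}

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const repo = codecommit.Repository.fromRepositoryName(this, 'Repo', 'my-service');

    // The pipeline synthesizes the app and keeps itself up to date ("self-mutation")
    // whenever the pipeline definition itself changes.
    const pipeline = new CodePipeline(this, 'Pipeline', {
      synth: new ShellStep('Synth', {
        input: CodePipelineSource.codeCommit(repo, 'main'),
        commands: ['npm ci', 'npm run build', 'npx cdk synth'],
      }),
    });

    pipeline.addStage(new ServiceStage(this, 'Prod'));
  }
}
```

The appeal for us was exactly this: one pipeline definition per service, written in the same language as the service’s infrastructure, instead of hand-rolled CodePipeline plumbing for each one.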
Intro to CDK
AWS CDK (Cloud Development Kit) is a new way to develop cloud infrastructure on AWS by bringing your favorite programming language to bear as an abstraction on top of CloudFormation. This won’t be a super in-depth post on the tech and how to apply it (I’ll follow up with more articles later), but I’d like to outline some of the benefits and reasons you might consider coding up your next feature’s infrastructure with it.
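To give a small taste of the abstraction, here’s a hedged TypeScript sketch (names and asset paths are invented) where a handful of CDK lines wire an S3 bucket to a Lambda function, something that takes considerably more ceremony in raw CloudFormation.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { S3EventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

class ThumbnailStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const uploads = new s3.Bucket(this, 'Uploads');

    const resizer = new lambda.Function(this, 'Resizer', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('resizer'),
    });

    // One line of intent; CDK synthesizes the bucket notification,
    // permissions, and wiring into CloudFormation for you.
    resizer.addEventSource(new S3EventSource(uploads, {
      events: [s3.EventType.OBJECT_CREATED],
    }));
  }
}

const app = new App();
new ThumbnailStack(app, 'ThumbnailStack');
```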