6 Lessons Learned Sending Mass Emails With AWS Lambda

Austin Huminski
Published in IOpipe Blog
May 8, 2019

In my previous post, “How to Use AWS Lambda to Send High Volume Emails at Scale,” I talked about how we could leverage AWS Lambda to send a high volume of emails at scale.

It wasn’t magic. In fact, it took many attempts to have this architecture work exactly the way I designed it. Although I went back and forth on different ideas, the great part about cloud architecture and AWS was that I was able to throw together a proof of concept with serverless tools easily and inexpensively.

I was able to start a service with a click in the console and play around with it to see if it was the right fit.

So, for those new to serverless architectures, here is a window into what went wrong, what went right, and what went random.

Dealing with time limits

SQS offers a FIFO queue type that I was hoping to use for its exactly-once processing. This would make sure an individual email would be sent only once to a recipient. FIFO queues, however, don't provide as much throughput as a standard queue. With larger email lists, my Lambda function was creeping up on the five-minute limit. If the limit was hit, Lambda would attempt to retry, which meant I would be duplicating email messages on my queue. I eventually had to switch from the FIFO queue back to a standard queue so my messages could be placed on the queue quickly enough (read below to see how I suppressed duplicates).
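
As a rough sketch (not the exact production code), placing the messages on a standard queue with boto3 might look something like this; the queue URL and message shape here are placeholders, not the original values:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL for the standard (non-FIFO) send queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/email-send-queue"


def enqueue_recipients(recipients, newsletter_uuid):
    """Enqueue one message per recipient, 10 at a time (the SQS batch limit)."""
    for i in range(0, len(recipients), 10):
        batch = recipients[i:i + 10]
        entries = [
            {
                "Id": str(n),
                "MessageBody": json.dumps(
                    {"email": r["email"], "newsletter_uuid": newsletter_uuid}
                ),
            }
            for n, r in enumerate(batch)
        ]
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
```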

At this point, I also decided to split up my larger detail file into smaller chunks. Splitting out the main detail file allowed us to distribute the work to multiple Lambdas at once, getting everything done much faster and within our five-minute time limit. In classic fashion, about two months after this went live, the time limit for Lambda increased to 15 minutes. However, I'm happy I made the change anyway, since it's a much more resilient design.
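
Here's a minimal sketch of what that chunking step could look like inside the process_detail_file Lambda. The chunk size, the field names inside the detail file, and the chunks/ key prefix are all assumptions for illustration; the only naming requirement from the post is that chunk files end in chunk.json:

```python
import json

import boto3

s3 = boto3.client("s3")
CHUNK_SIZE = 1000  # assumed recipients-per-chunk; tune so each chunk fits the time limit


def lambda_handler(event, context):
    """Split the uploaded detail file into smaller *chunk.json files on S3."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]  # e.g. detail_file_<UUID of edition>.json

    detail = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
    recipients = detail["recipients"]  # assumed field name

    for i in range(0, len(recipients), CHUNK_SIZE):
        chunk = dict(detail, recipients=recipients[i:i + CHUNK_SIZE])
        # Assumed chunk naming: chunks/<UUID>_<n>_chunk.json (must end in chunk.json).
        chunk_key = "chunks/" + key.replace("detail_file_", "").replace(
            ".json", f"_{i // CHUNK_SIZE}_chunk.json"
        )
        s3.put_object(Bucket=bucket, Key=chunk_key, Body=json.dumps(chunk))
```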

Making sure emails don’t get sent twice

The main email application in this example has a task scheduler running as a background process, which is responsible for kicking off email deploys at a scheduled date and time.

One very bad day, this scheduler decided to kick off the same task twice. It uploaded the detail_file.json to S3 once, and then five minutes later, it did it again. This meant our email got sent out twice to the same audience. Not ideal for spam scores.

I needed to make sure that, regardless of where things break, this type of issue could never happen again. The send process needed to be completely idempotent.

AWS ElastiCache for Redis to the rescue! As part of the send_email Lambda, before each send we do a check against the ElastiCache Redis cluster to see if the key combo of email recipient and newsletter UUID already exists. If it does, we skip over it. If not, we send the email and set that email as sent in Redis, so it won't be delivered twice.
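
A simplified sketch of that check is below. The key format, the REDIS_HOST environment variable, the send_fn callable, and the 30-day expiry are my assumptions for illustration, not necessarily what the original function used:

```python
import os

import redis

# Assumed: the ElastiCache endpoint is passed in via an environment variable.
r = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)


def send_once(recipient_email, newsletter_uuid, send_fn):
    """Send only if this recipient/newsletter combo hasn't already been sent."""
    key = f"sent:{newsletter_uuid}:{recipient_email}"
    if r.exists(key):
        return False  # already delivered, skip
    send_fn(recipient_email)  # e.g. the actual SES call
    r.set(key, 1, ex=60 * 60 * 24 * 30)  # assumed 30-day retention of the sent marker
    return True
```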

Understanding your SQS settings will save headaches later

It was imperative to understand all the settings of my SQS queue. Most important was the visibility timeout: the amount of time our Lambda has to process a message before it becomes visible again to our other Lambdas. When our send_email Lambda function receives a message from the queue, the message remains in the queue. Once you're done with the message, you are responsible for deleting it from the queue so it does not get consumed again. Not understanding this can lead to a lot of headaches while debugging issues.
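
To make that concrete, here's a rough sketch of the receive/delete cycle with boto3, assuming the function polls the queue itself rather than using the managed Lambda-SQS event source mapping (which deletes messages for you on success). The queue URL, timeout value, and handle_send stub are placeholders:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/email-send-queue"  # placeholder


def handle_send(body):
    """Placeholder for the actual send logic (parse the message, call SES, etc.)."""


def consume_batch():
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        VisibilityTimeout=120,  # must comfortably exceed the time needed to send
    )
    for msg in resp.get("Messages", []):
        handle_send(msg["Body"])
        # Delete only after a successful send; otherwise the message becomes
        # visible again when the timeout expires and another Lambda picks it up.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```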

Using S3 event trigger options

It’s less commonly known that you can trigger different Lambda functions based upon the prefix or suffix of the file you drop in S3. I wanted to leverage the same S3 bucket that I had set up throughout the whole send process, since it was already locked down. When I uploaded a detail file, the naming convention was detail_file_<UUID of edition>.json. So, I was able to trigger my process_detail_file Lambda function on any file starting with detail_file, and my add_to_sqs_queue Lambda function on any file that ended in *chunk.json. It allowed me to keep each email edition’s info in the same spot.
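
Those prefix and suffix filters can be set on the bucket directly. A rough sketch with boto3 follows; the bucket name and function ARNs are placeholders, and I've assumed the chunk files sit under a separate chunks/ key prefix, since S3 rejects notification filter rules that could match the same object for the same event type:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="email-send-bucket",  # placeholder bucket name
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                # Detail file uploads kick off process_detail_file.
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process_detail_file",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [
                    {"Name": "prefix", "Value": "detail_file"},
                ]}},
            },
            {
                # Chunk files kick off add_to_sqs_queue.
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:add_to_sqs_queue",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [
                    {"Name": "prefix", "Value": "chunks/"},
                    {"Name": "suffix", "Value": "chunk.json"},
                ]}},
            },
        ]
    },
)
```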

Putting old files to pasture

You don’t always want lists of emails sitting in a standard S3 bucket forever. You can set up a lifecycle policy to delete these files or move them off to Glacier after a certain time.
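
For example, a lifecycle rule like the following (bucket name, prefix, and day counts are placeholders) would move detail files to Glacier after 30 days and delete them after 90:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="email-send-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "retire-detail-files",
                "Status": "Enabled",
                "Filter": {"Prefix": "detail_file"},  # placeholder prefix
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```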

Save yourself time with existing serverless frameworks

I was pretty early in my days of learning serverless patterns and techniques when I created this architecture. There are a bunch of frameworks out there that would have made my life a lot easier. For those new to serverless, these frameworks allow you to define your serverless architecture as code, test locally, and much more.

A few I’ve used personally include Serverless Framework, AWS SAM, AWS Chalice, and Zappa. Zappa is really cool if you’re doing serverless Python. In a former role, I had to throw together an internal application within two weeks, and Zappa allowed me to get my new Django app up really quickly without having to worry about provisioning any EC2 instances.

If you’re interested in learning more about serverless in action, be sure to attend our next serverless webinar with Comic Relief and AWS on May 25.

To get real-time visibility into the most granular behavior of your serverless application, try IOpipe for free here.
