Austin Huminski
Published in IOpipe Blog
Feb 25, 2019


[Editor’s note: Austin recently joined the IOpipe team as a Solution Architect. Look for more serverless technical tips and tricks from Austin as he helps developers deliver with confidence on serverless architectures.

Welcome to the team, Austin!]

How to Use AWS Lambda to Send High Volume Emails at Scale

Amazon SES is a fantastic way to send high volume marketing emails at a competitive price, but there’s more than one way to implement a sending solution on top of it.

As a developer, I prefer to rest easy and know things will just work. In this piece, I’ll show how you can use AWS SES, SQS, S3, and Lambda to send millions of emails at scale.

The Problem:

You have a nice little script you use to manually deploy your email campaigns. It runs on a t2.small EC2 instance, grabs the HTML from S3 and the email list from a MySQL database, inserts a unique unsubscribe link, and finally sends the emails out one at a time.

This is fine with a smaller list of recipients, but as your list grows over time, you may hit a wall: the script starts taking too long to send out all your emails.

OK, you could beef up your EC2 instance and maybe take advantage of some multiprocessing. But what if your email list keeps growing?

You’d have to constantly monitor the instance and keep provisioning bigger EC2s to meet your send requirements. And unless you’re using spot instances, you don’t want an EC2 running 24/7, eating up monthly costs, when you only deploy your mass emails a few times a week.

The Solution:

Instead of worrying about an email server, we can deploy Lambda functions to do the work we need done and then shut everything down, paying only for the compute resources we use.

In this scenario, our user has already created the base HTML file we want to deploy, and it’s saved in S3. The user decides it’s time to deploy. They fill out some additional info on a simple front-end form, like the subject line and sender information, then hit the deploy button. On click, we query our database for the desired email list and write it to a JSON file along with all the other information we need to send the email.

We’ll call this our detail_file:

{
"subject": "My Serverless Newsletter",
"html_file": "s3://my-bucket/newsletter_2019_02_22.html,
"sender": "newsletter@austinhuminski.com",
"emails": ["email1@gmail.com", "email2@yahoo.com"],
"EDITION_UUID": "55a5a6f3-fee1-4124-818e-a6bdfe8a35f8"
}
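Getting this file into S3 is what starts everything off. Here’s a rough sketch of that upload, assuming Python and boto3 (the bucket name and key layout are hypothetical):

import json
import boto3

s3 = boto3.client("s3")

def upload_detail_file(detail):
    # Writing the detail file to the private bucket is what kicks off
    # the rest of the pipeline via an S3 event notification.
    s3.put_object(
        Bucket="my-email-pipeline",  # hypothetical bucket name
        Key="detail_files/{}.json".format(detail["EDITION_UUID"]),
        Body=json.dumps(detail),
    )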

We save this file off to a private S3 bucket, which triggers our process_detail_file Lambda. This function is in charge of splitting the uploaded detail file into smaller files, chunking up the email list. Since the list of recipients could be in the hundreds of thousands, splitting it up lets us run our next Lambda function in parallel.

Each chunked file gets saved back to the S3 bucket, which kicks off the add_to_sqs_queue Lambda function.

Take our detail file, chunk it up into smaller files of email addresses, and write the new files back to S3
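A minimal sketch of what process_detail_file could look like, again assuming Python and boto3 (the chunk size and key prefix are illustrative, not prescriptive):

import json
import boto3

s3 = boto3.client("s3")
CHUNK_SIZE = 500  # illustrative; tune to your add_to_sqs_queue runtime

def handler(event, context):
    # The S3 PUT of the detail file is what invoked us.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    detail = json.loads(
        s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    )
    emails = detail["emails"]

    # Write one smaller detail file per chunk of recipients so the
    # add_to_sqs_queue function can fan out in parallel.
    for i in range(0, len(emails), CHUNK_SIZE):
        chunk = dict(detail, emails=emails[i:i + CHUNK_SIZE])
        s3.put_object(
            Bucket=bucket,
            Key="chunks/{}_{}.json".format(detail["EDITION_UUID"], i),
            Body=json.dumps(chunk),
        )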

The purpose of the add_to_sqs_queue Lambda is to actually create each email object. This includes loading the HTML from S3 and creating a unique unsubscribe link for every recipient. After each individual email is created, we save it to an SQS queue called emails_to_send_queue. Saving messages to an SQS queue is a fantastic way to durably process information: by default, messages stay in the queue for four days, so if you have a problem processing a message, you can always go back and retry it later.

Each chunked file gets processed in parallel
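Here’s a sketch of add_to_sqs_queue under the same assumptions. The unsubscribe URL format, the {{unsubscribe_link}} placeholder in the HTML, and the queue URL are all made up for illustration:

import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
# Hypothetical queue URL
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/emails_to_send_queue"

def handler(event, context):
    record = event["Records"][0]["s3"]
    chunk = json.loads(
        s3.get_object(
            Bucket=record["bucket"]["name"],
            Key=record["object"]["key"],
        )["Body"].read()
    )

    # Load the campaign HTML from S3 once per chunk.
    html_bucket, html_key = chunk["html_file"].replace("s3://", "").split("/", 1)
    html = s3.get_object(Bucket=html_bucket, Key=html_key)["Body"].read().decode()

    for address in chunk["emails"]:
        # Personalize with a unique unsubscribe link (format is illustrative).
        unsubscribe = "https://example.com/unsubscribe?edition={}&email={}".format(
            chunk["EDITION_UUID"], address
        )
        message = {
            "to": address,
            "subject": chunk["subject"],
            "sender": chunk["sender"],
            "html": html.replace("{{unsubscribe_link}}", unsubscribe),
        }
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(message))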

Once messages are placed into the queue, our send_email Lambda function kicks off. Adding SQS as an event trigger for Lambda offloads a whole load of work that AWS just handles for you.

Based on the number of messages on the queue, AWS will scale in or out the number of send_email Lambda functions running for us. SQS was only recently introduced as an event trigger; before, we would have to manually invoke our send_email Lambda and figure out exactly how many should run in parallel to send our emails out in a timely manner.
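A minimal sketch of send_email, assuming the message shape from the previous step and the standard SQS event payload Lambda hands you (the configuration set name is hypothetical and is wired up below):

import json
import boto3

ses = boto3.client("ses")

def handler(event, context):
    # Lambda's SQS integration delivers messages in batches under Records.
    for record in event["Records"]:
        message = json.loads(record["body"])
        ses.send_email(
            Source=message["sender"],
            Destination={"ToAddresses": [message["to"]]},
            Message={
                "Subject": {"Data": message["subject"]},
                "Body": {"Html": {"Data": message["html"]}},
            },
            ConfigurationSetName="email-tracking",  # hypothetical name
        )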

Lastly, we set up a configuration set in SES that tells AWS to publish all of our email tracking events (opens, clicks, sends, etc.) to another SQS queue. (SES publishes these events to an SNS topic, and the queue subscribes to that topic.)

Depending on how many emails are on the queue, AWS will scale in or out how many send_email Lambda functions run
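Wiring up that configuration set might look like the sketch below. The names, event types, and topic ARN are illustrative, and the tracking SQS queue is assumed to already be subscribed to the SNS topic:

import boto3

ses = boto3.client("ses")

# Create the configuration set, then point its event destination at an
# SNS topic that the tracking SQS queue is subscribed to.
ses.create_configuration_set(
    ConfigurationSet={"Name": "email-tracking"}
)
ses.create_configuration_set_event_destination(
    ConfigurationSetName="email-tracking",
    EventDestination={
        "Name": "tracking-events",
        "Enabled": True,
        "MatchingEventTypes": ["send", "delivery", "open", "click",
                               "bounce", "complaint"],
        "SNSDestination": {
            "TopicARN": "arn:aws:sns:us-east-1:123456789012:email-tracking"  # hypothetical
        },
    },
)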

The last Lambda we put to work is the process_email_events function, which ships all of the messages from our email tracking SQS queue to Redshift.
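One way to sketch process_email_events is to stage each batch of events to S3 and issue a Redshift COPY. Everything here (bucket, cluster endpoint, table, credentials, IAM role) is hypothetical, and psycopg2 would need to be bundled into the deployment package:

import json
import boto3
import psycopg2  # would need to be packaged with the function

s3 = boto3.client("s3")

def handler(event, context):
    # Each record body is an SES event. (If the queue is fed by SNS, the
    # event sits inside an SNS envelope and needs unwrapping first.)
    lines = "\n".join(record["body"] for record in event["Records"])
    key = "email_events/{}.json".format(context.aws_request_id)
    s3.put_object(Bucket="my-email-pipeline", Key=key, Body=lines)

    # COPY the staged newline-delimited JSON into Redshift.
    conn = psycopg2.connect(host="redshift-cluster.example.com", port=5439,
                            dbname="analytics", user="loader", password="...")
    with conn, conn.cursor() as cur:
        cur.execute(
            "COPY email_events FROM 's3://my-email-pipeline/{}' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader' "
            "FORMAT AS JSON 'auto'".format(key)
        )
    conn.close()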

And that’s how we can send high volume emails while staying completely serverless. At the end of the day, we end up with the diagram below.

This was a very high-level explanation, and not all of the details in the final diagram are discussed here at length. There was also a lot of trial and error before getting this right.

Coming soon is another article where I’ll dig into more of the details and lessons learned while building this architecture.
