Part 2: How Far Out is AWS Fargate?

Michael Lavers
Published in IOpipe Blog · Sep 17, 2019 · 12 min read

Getting started tutorial with a simple Flask app

In part one, I made the case that AWS Fargate is both a complement to AWS Lambda and a simpler alternative to Kubernetes. But nothing makes a better case than getting our hands dirty. In this blog post, I’ll introduce a CLI tool that is Fargate’s equivalent to Kubernetes’ kubectl, and we’ll deploy a simple Flask app to demonstrate how easy it is to get started with Fargate.

Prerequisites

Before we begin, we’re first going to get our environment set up to use AWS Fargate. You may already have some of these prerequisites in place, especially if you have worked with AWS before. If that’s the case, feel free to skip those steps.

First things first: if you don’t have an AWS account, you’ll need to create one. After that, you’ll need to create an administrator user, generate and download credentials for that user, and create a shared credentials file containing those credentials. This setup is handy because all of AWS’s tools look for a shared credentials file by default.
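If you haven’t made one before, the shared credentials file lives at ~/.aws/credentials on Linux and macOS (%USERPROFILE%\.aws\credentials on Windows) and looks like this, with the placeholder values swapped for the access key ID and secret access key you downloaded:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY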

As mentioned in part one, Fargate orchestrates containers. As such, we’ll need a way to create containers. For this example, we’re going to use Docker. So let’s install that next.

Last but not least, we’ll need to install Go. This is because the Fargate CLI tool is written in Go.

Installing the Fargate CLI

Now, I know I promised that using Fargate was easy. So if your head is spinning from the prerequisites above, take heart: it’s all downhill from here.

To install the Fargate CLI, run:

$ go get -u github.com/awslabs/fargatecli

And then check that the installation was successful:

$ fargatecli --version
fargate version 0.3.2

If you get a “command not found” error, you may need to restart your terminal. Assuming the above command works, we’re all set. Let’s see what this tool has to offer:

$ fargatecli --help

The above command will give us the following description:

fargate is a command-line interface to deploy containers to AWS Fargate that makes it easy to run containers in AWS as one-off tasks or managed, highly available services secured by free TLS certificates. It bundles the power of AWS including Amazon Elastic Container Service (ECS), Amazon Elastic Container Registry (ECR), Elastic Load Balancing, AWS Certificate Manager, Amazon CloudWatch Logs, and Amazon Route 53 into an easy-to-use CLI.

And below that we can see that it groups its commands like so:

Available Commands:
  certificate Manage certificates
  help        Help about any command
  lb          Manage load balancers
  service     Manage services
  task        Manage tasks

If we want to know more about a command’s subcommands:

$ fargatecli help service

Don’t worry about what each command does just yet. First let’s build a simple web app and then we’ll go over the commands we need to run to deploy it.

Creating the Web App

Our web app is going to be a simple Flask app, but yours can be written in any language, using any framework, as long as it runs in a Docker container.

First we’ll want to create a directory for our web app:

$ mkdir fargate-web-app
$ cd fargate-web-app

Next, we’ll want to define our web app’s dependencies. In Python, a common practice is to create a requirements.txt file containing the names of our package dependencies.
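For this example, our only package dependency is Flask. One note: returning a dictionary directly from a view, as our app will do below, requires Flask 1.1 or newer, so a minimal requirements.txt can be a single line:

flask>=1.1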

Now that we have our dependencies sorted out, let’s create an app.py file to contain our web app and add the following code to it:

from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return {"hello": "world"}


if __name__ == "__main__":
    app.run(host="0.0.0.0")

This is a very basic web app that returns JSON when you make a GET / request. The last two lines tell Python that if we run the app.py script directly, Flask should fire up a web server for us. More on that in a moment.

Lastly, our web app needs to be able to live within a Docker container. In order to do that, we need to create a Dockerfile to tell Docker how our container should be built. Create a Dockerfile and add the following to it:

FROM python:3.7-alpine

WORKDIR /app

COPY requirements.txt /app/requirements.txt

RUN pip install --no-cache-dir -r requirements.txt

COPY app.py /app/app.py

EXPOSE 5000

CMD ["python", "app.py"]

The first line of our Dockerfile tells Docker to start from a base image running Python 3.7 on Alpine Linux, a minimalist Linux distribution designed specifically for containers. The next line specifies a working directory. After that, we copy our requirements.txt into the container and install our dependencies using pip. We then copy app.py into the container, tell Docker that the container will listen on port 5000 (the default port for Flask’s built-in web server), and finally run the app.

If you’re wondering why we don’t just copy the requirements.txt and app.py in one copy step, it's because Docker caches each container build step into layers. This means that if we were to build our Docker container, for subsequent builds Docker will only build the layers that have changed instead of building all the layers all over again. So in our Dockerfile we add our requirements.txt and install our dependencies first, because our dependencies are likely to change much less frequently than our web app code. In this simple example structuring our Dockerfile this way offers only a small optimization to our build times, but imagine the impact for a web app that has hundreds of dependencies.
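To make that concrete, here’s a hypothetical version of the same Dockerfile with the copy steps collapsed into one (don’t actually use this):

FROM python:3.7-alpine

WORKDIR /app

# Copying everything first means any change to app.py invalidates this layer...
COPY . /app

# ...which forces the dependency install below to rerun on every build.
RUN pip install --no-cache-dir -r requirements.txt

EXPOSE 5000

CMD ["python", "app.py"]

With this layout, every edit to app.py busts the cache at the COPY step, so pip reinstalls all of our dependencies on every build. Our two-step version only pays that cost when requirements.txt itself changes.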

Building the Container

At this point we should have a fargate-web-app directory with an app.py, a Dockerfile and a requirements.txt file in it. Assuming you’ve installed Docker, we can now build our container:

$ docker build -t fargate-web-app .

This command will start the build process, which will include downloading the base image. It may take a little while on first run, but every build after that should be much quicker thanks to layer caching. If after running this command you see the following:

Successfully tagged fargate-web-app:latest

Then our container build was successful. Before we deploy our container, let’s run it to make sure everything is working.

Running the Container

To run our web app container we need to specify the tag that was created during our build step, in this case fargate-web-app:latest. We can version our containers using tags, similar to git tags; the latest tag points at our most recent build. To run our container:

$ docker run --rm -p 5000:5000 fargate-web-app:latest

Which should output the following:

* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Recall that in our app.py we had a couple of lines to fire up a web server if we run app.py directly as a script. In our Dockerfile we run it with:

CMD ["python", "app.py"]

The output we see when running our container comes from the web server built into Flask. Notice that Flask warns against using this built-in server in production. This is good advice; we’re only using it for this example. For a production Python web app, you’ll want a WSGI server like gunicorn or uwsgi.
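As a quick sketch of what that might look like (not something we’ll deploy in this tutorial), if we added gunicorn to our requirements.txt, we could swap the last line of our Dockerfile to start gunicorn instead of the Flask development server:

# Hypothetical production variant: serve the "app" object in app.py with gunicorn.
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]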

The --rm flag tells Docker to clean up the container when we're done with it, and the -p 5000:5000 tells Docker to map port 5000 between the host and the container. To test that this is all working, we can fire up a new terminal and run:

$ curl http://localhost:5000/
{"hello":"world"}

If the above doesn’t work, you can use the following command to see what address the container is listening on:

$ docker ps

To exit our running container, return to the previous terminal and press CTRL+C.

If everything’s looking good by this point, then we’re ready to deploy.

Deploying the Container

In part one, I made the claim that we could deploy a web app in just a few commands. And, despite all the commands we’ve run so far, that’s still true. No, really.

Thus far, this post may feel like a rather roundabout way to get to a Fargate deployment. But remember, while Fargate makes container orchestration easy, it doesn’t build the web app and subsequent container for us. So hopefully the above exercise served its intended purpose.

NOTE: From this point on, we will be creating AWS resources that aren’t free. Some of them fall under the free tier provided by AWS, but you’ll need to verify this within your AWS account. Please remember to follow the steps described in “Cleaning Up” to ensure that you don’t continue to get billed for resources we create here.

In part one we covered the three main components of Fargate: clusters, services and tasks. A cluster can have one or more services, and a service can have one or more tasks. There are also task definitions, which are blueprints for how a task should be configured. If you’re unsure about any of these terms, I cover them in more detail in part one.

To start we need a cluster. Conveniently the Fargate CLI will create a Fargate cluster for us if one doesn’t exist. So let’s create a service called “fargate-web-app” which will also create a Fargate cluster called “fargate”:

$ fargatecli service create --port http:5000 fargate-web-app

The --port http:5000 tells the Fargate CLI that the tasks running in our service will be listening on port 5000 and expect the HTTP protocol. Running the above command performs the following steps:

  1. Rebuilds our Docker container using the Dockerfile in our fargate-web-app directory.
  2. Creates an ECR (Elastic Container Registry) repository and pushes our container image to it.
  3. Creates a Fargate cluster named “fargate” if one doesn’t already exist.
  4. Creates a “fargate-web-app” service in this cluster.
  5. Creates/configures subnets and security groups for this service.
  6. Creates a task definition with the protocol and port mapping described above. The task definition also specifies which container registry and tag should be used to pull our container image.
  7. Starts a task based on that task definition.

This may seem like a lot, but most of it only needs to happen once. From here, to deploy to our service we only need to:

  1. Push a new container to ECR.
  2. Create a new task definition.
  3. Restart tasks based on the new task definition.

And all three of these steps are handled by the Fargate CLI with a single command:

$ fargatecli service deploy fargate-web-app

Let’s take a closer look at the service we just created:

$ fargatecli service list
NAME IMAGE CPU MEMORY LOAD BALANCER DESIRED RUNNING PENDING
fargate-web-app XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/fargate-web-app:XXXXXXXXXXXXXX 256 512 1 1 0

For more detail:

$ fargatecli service info fargate-web-app
Service Name: fargate-web-app
Status:
Desired: 1
Running: 1
Pending: 0
Image: XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/fargate-web-app:XXXXXXXXXXXXXX
Cpu: 256
Memory: 512
Subnets: subnet-XXXXXXXX, subnet-XXXXXXXX, subnet-XXXXXXXX, subnet-XXXXXXXX
Security Groups: sg-XXXXXXXXXXXXXXXXX

Tasks
ID IMAGE STATUS RUNNING IP CPU MEMORY DEPLOYMENT
XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/fargate-web-app:XXXXXXXXXXXXXX running 1h7m11s 11.222.333.44 256 512 1

Deployments
ID IMAGE STATUS CREATED DESIRED RUNNING PENDING
1 XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/fargate-web-app:XXXXXXXXXXXXXX primary 2019-09-12 19:30:58 +0000 UTC 1 1 0

This tells us quite a bit about our new Fargate service. First it describes which container image our service is currently running, as well as how many tasks. It also displays the subnets and security groups in which the service is running. By default, the Fargate CLI will configure tasks to be assigned a public IP address. You can find this IP in the “Tasks” section under the “IP” column. If you wanted to make an HTTP request to this task, you could do something like this:

$ curl http://11.222.333.44:5000/
{"hello":"world"}

The IP assigned to your task will be different, obviously. But remember to specify port 5000, as that’s the port our task is configured to listen on. Having an assigned IP can be handy if we want to test a task directly, but what happens if we were to scale to two tasks?

$ fargatecli service scale fargate-web-app 2
[i] Scaled service fargate-web-app to 2

And rerun:

$ fargatecli service info fargate-web-app

We now have two IPs under the “Tasks” section. Again, this may be useful if we needed to be able to test a specific task directly. But what if we wanted to spread our HTTP requests across both of these tasks? That’s where a load balancer comes in, and Fargate CLI is here to help.

Load Balancing the Service

Elastic Load Balancing offers three kinds of load balancers: the Classic Load Balancer, the Application Load Balancer (ALB) and the Network Load Balancer (NLB); Fargate services work with the latter two. For our example, we’re going to use an ALB to load balance HTTP requests across our two Fargate tasks.

To create an application load balancer with the Fargate CLI:

$ fargatecli lb create --port http:80 fargate-web-lb
[i] Created load balancer fargate-web-lb

One limitation of Fargate (at the time of this blog post) is that you can only specify a load balancer at Fargate service creation. This means that because we’ve already created a Fargate service, in order to use our newly created load balancer we need to recreate the service. Thankfully, Fargate CLI can help:

$ fargatecli service destroy fargate-web-app
[!] Cannot destroy service fargate-web-app
2 tasks running, scale service to 0

Ah — to make sure we intend to remove the service, Fargate requires us to scale its running tasks to zero. This ensures that we don’t accidentally delete a service with many running tasks. So let’s do that now:

$ fargatecli service scale fargate-web-app 0
[i] Scaled service fargate-web-app to 0

And try again:

$ fargatecli service destroy fargate-web-app
[i] Destroyed service fargate-web-app

Now let’s recreate the service, this time specifying that we want to use our newly created load balancer:

$ fargatecli service create --lb fargate-web-lb --port http:5000 fargate-web-app

This will run all the same steps that were previously run when we first created a service, but this time our service will be configured as a target for our load balancer. Once service creation is complete, we can inspect the service again with:

$ fargatecli service info fargate-web-app

And now we have a new section:

Load Balancer
Name: fargate-web-lb
DNS Name: fargate-web-lb-XXXXXXXXX.us-east-1.elb.amazonaws.com
Ports:
HTTP:80:
Rules: DEFAULT=

And if we want to test our new load balancer:

$ curl http://fargate-web-lb-XXXXXXXXX.us-east-1.elb.amazonaws.com
{"hello":"world"}

Nice. But remember that because we recreated our Fargate service, we’re back at one running task. So let’s scale back up to two tasks:

$ fargatecli service scale fargate-web-app 2
[i] Scaled service fargate-web-app to 2

And now our load balancer is balancing HTTP requests across two running tasks. How cool is that? But wait — how do we know it’s balancing requests?

Observing the Service

At this point, we now have a Fargate service with two running tasks and a load balancer balancing requests across them. In order to verify that everything is working, we can test our setup by first making some HTTP requests to our load balancer and then running:

$ fargatecli service logs fargate-web-app

This will output logs for both of our tasks. We can differentiate between them by their ID to verify that HTTP requests are being balanced as expected. We can get a list of our running tasks — including their IDs — with:

$ fargatecli service ps fargate-web-app
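If you need to generate some traffic to observe, a quick shell loop does the trick (a sketch; substitute your own load balancer’s DNS name):

$ for i in $(seq 1 10); do curl -s http://fargate-web-lb-XXXXXXXXX.us-east-1.elb.amazonaws.com/; echo; done

Rerun fargatecli service logs fargate-web-app afterwards and you should see the GET / requests spread across both task IDs.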

Cleaning Up

As mentioned before, the AWS resources we created are not free. So to clean them up, we can run the following:

$ fargatecli service scale fargate-web-app 0
[i] Scaled service fargate-web-app to 0

$ fargatecli service destroy fargate-web-app
[i] Destroyed service fargate-web-app

$ fargatecli lb destroy fargate-web-lb
[i] Destroyed load balancer fargate-web-lb

In Summation

As we can see, Fargate is really easy to get up and running, especially in combination with the Fargate CLI. In just a few commands, we were able to set up a Fargate service behind a load balancer. But that’s only scratching the surface of what the Fargate CLI can do. Additional things you can do include:

  • Create a free SSL certificate with AWS ACM and configure your ALB to do SSL termination using that certificate (see the fargatecli certificate request and fargatecli lb create commands for details).
  • Manage environment variables for running tasks (see fargatecli service env for details, and the sketch below).
  • Run one-off Fargate tasks (see fargatecli task run and fargatecli task stop for details).
  • Scale tasks both horizontally and vertically (see fargatecli service scale and fargatecli service update for details).
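For example, setting an environment variable on our service would look something like the following. This is a sketch from memory; both the FLASK_ENV variable and the exact flags are illustrative, so double-check them against fargatecli service env --help:

$ fargatecli service env set fargate-web-app --env FLASK_ENV=production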

And a lot more. For a full list of available Fargate CLI commands, see the docs. And be sure to check out this Fargate CLI screencast.

If you’re using AWS Lambda in conjunction with Fargate, sign up for IOpipe’s real-time monitoring to observe application performance changes.

No-code install options, function profiling, alerts, tracing, and custom metrics are all available on our free-forever tier.

About IOpipe

As the leader in serverless dev tooling for monitoring and observability, IOpipe offers real-time visibility into the most granular behaviors of today’s serverless applications on AWS Lambda.

Founded in 2016 by Erica Windisch and Adam Johnson, IOpipe reduces debugging time from hours to seconds, delivers transparent insights into the behaviors and performance of your serverless functions, and reduces risk for enterprises shifting to serverless.

Working with global brands like Matson, Rackspace, and APM Music, IOpipe empowers engineering teams to deliver with confidence, debug intelligently, and get busy building the impossible.

In other words, IOpipe makes it a lot more fun to be a developer. Visit www.iopipe.com to learn more and try it for free.
