How to Deploy from GitLab to AWS Fargate

28 Dec 2017
Tagged
Technical

By Curt Grimes
11-minute read

Web Captioner now runs on AWS Elastic Container Service (ECS) and Fargate, services from Amazon that let you deploy a Dockerized application without having to configure servers. This post explains how I deploy the Web Captioner application to AWS as a Fargate task using GitLab.

The end result

With GitLab and AWS, I can make one-click deployments to my staging and production environments. I can independently deploy different branches of code to each environment. With minimal effort, it’s also possible for me to create an additional stack (load balancer and all), and deploy to that.

Screenshot of GitLab deployment options

AWS also provides alerts if my application ever becomes unavailable (or the number of concurrent instances of my application falls below a threshold) and some cool graphs. Graphs are always cool.

Screenshot of AWS target group metrics

AWS Fargate and Elastic Container Service

AWS Fargate is a new technology in the Amazon Web Services Elastic Container Service that allows you to run a Dockerized application without having to provision virtual servers. In conjunction with other AWS services, you can:

  • Configure multiple instances of your application to run concurrently for redundancy
  • Use a load balancer to distribute traffic among multiple instances
  • Run constant health checks on instances, and if an instance fails, start a new one in its place
  • Automatically scale the number of running instances up or down depending on CPU usage (see the CLI sketch below)

I decided to use AWS ECS and Fargate for Web Captioner because of the redundancy and high availability it provides. It also abstracts away just enough of the work of server management so that I can spend more time on application development.
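
For reference, scaling the task count is handled through the Application Auto Scaling service. This isn't pulled from my actual stack; it's just a rough sketch, using the AWS CLI, of how a CPU-based target tracking policy could be attached to an ECS service (the cluster name, service name, and numbers are example values matching the naming used later in this post):

# Register the ECS service as a scalable target (resource ID format is
# service/<cluster-name>/<service-name>).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/webcaptioner-production/webcaptioner \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 2 --max-capacity 4 \
  --region us-east-1

# Scale out or in automatically to keep average CPU utilization near 75%.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --resource-id service/webcaptioner-production/webcaptioner \
  --scalable-dimension ecs:service:DesiredCount \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{"TargetValue": 75.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}}' \
  --region us-east-1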

Preparing the AWS stack

The AWS stack includes the following resources that you will need to set up.

  1. An Amazon ECS cluster
  2. A service in the cluster that runs tasks using the Fargate launch type
  3. An Amazon Elastic Container Registry (ECR) where Docker images will be stored
  4. A task definition that references a Docker image stored in your registry and defines CPU and memory requirements for that image. The service uses this task definition to start one or more running instances, called tasks.
  5. An application load balancer that routes requests to healthy targets in a target group. My load balancer listens on ports 80 and 443 and forwards all traffic on those ports to one target group where one or more Web Captioner application tasks are running.
  6. A target group. My application is simple, so I only have one target group answering all types of requests. AWS Fargate abstracts away much of the work of dealing with a target group. Behind the scenes, each running Fargate task is registered (by IP address) as a target in this target group.

Using the cluster creation wizard

A good way to get all this set up is to follow Amazon’s cluster creation wizard in the AWS console. It’ll create these resources and link them all together, but there are certain things you won’t be able to change after they’re created, like the naming of some resources. To get around that, you can use CloudFormation to create the entire stack.

Making the entire stack with CloudFormation

It’s difficult to recreate a stack you’ve made with the wizard (for example, if you want to have separate staging and production environments), so I use this Web Captioner CloudFormation stack template (JSON) to easily create an entirely new instance of my stack.

Note that there are references to Web Captioner in here (search for “webcaptioner”) that you will need to change. When you create this stack, it also asks for the name of an existing task definition (so you will need to have a task definition already created). You’ll need to set 80 and 443 (or some other ports) as listening ports in your load balancer after the stack is created, or maybe add a certificate from AWS Certificate Manager if you’re going to use a domain name. If you use this template, treat it as a starting point and customize it to fit your needs.

webcaptioner-stack-template.json

You could also try creating your own CloudFormation stack based on the stack created by the cluster creation wizard, but you’ll have to do some tweaking to get it working in a repeatable way. The CloudFormation template above is the result of my tweaking to get something that works well for Web Captioner.
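
As a rough sketch of how a new environment could be spun up from that template with the AWS CLI (the stack name here is just an example, and the parameter key for the existing task definition name is hypothetical; use whatever parameters your template actually declares):

# Create a new stack from the template. CAPABILITY_IAM is needed if the
# template creates IAM resources.
aws cloudformation create-stack \
  --stack-name webcaptioner-staging \
  --template-body file://webcaptioner-stack-template.json \
  --parameters ParameterKey=TaskDefinitionName,ParameterValue=webcaptioner-staging \
  --capabilities CAPABILITY_IAM \
  --region us-east-1

# Poll until the stack reaches CREATE_COMPLETE.
aws cloudformation describe-stacks \
  --stack-name webcaptioner-staging \
  --query 'Stacks[0].StackStatus' \
  --region us-east-1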

Deploying from GitLab

My application is a Node.js application in a Docker container that exposes itself on port 8080. AWS’s application load balancer takes care of routing traffic from ports 80 and 443 to the container’s port 8080. Before you continue, you’ll want to make sure your application runs in a container and exposes itself on a single port. To keep things straight when configuring the load balancer, I expose a port that isn’t 80 or 443.
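
A quick way to sanity-check this locally before wiring up CI (the image name here is just an example):

# Build the image from the Dockerfile in the repository root and make sure
# the app answers on its single exposed port.
docker build -t webcaptioner-local .
docker run --rm -d -p 8080:8080 --name webcaptioner-local webcaptioner-local
curl -i http://localhost:8080/
docker stop webcaptioner-local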

gitlab-ci.yml

My gitlab-ci.yml file looks like this:

gitlab-ci.yml

image: docker:latest

services:
- docker:dind

stages:
- build
- deploy

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

Build:
  stage: build
  script:
    - docker build --pull -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:
    - source scripts/deploy.sh

Production:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: production
    url: https://webcaptioner.com
  script:
    - source scripts/deploy.sh
  when: manual

Let’s take a closer look at this.

image: docker:latest

services:
- docker:dind
 

These lines let us use the docker-in-docker service on GitLab.com, which gives us access to docker and docker-compose in our CI scripts.

stages:
- build
- deploy
 
We’ve got a build stage and a deploy stage in each pipeline:
Screenshot of GitLab deployment options


before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
 

We are logging into GitLab’s container registry. During the build stage, after we build our Docker image, we will push it to this registry. Later, during deployment, the image is retagged and pushed to the Amazon Elastic Container Registry (ECR).

Build:
  stage: build
  script:
    - docker build --pull -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
 

Our build stage will happen every time we push a new commit. $CI_REGISTRY_IMAGE and $CI_COMMIT_REF_NAME are special GitLab CI variables that get replaced respectively with the path to the GitLab registry tied to the current project and the current commit’s branch name.

My app has a Dockerfile in the root, so these commands will build the application, tag it something like registry.gitlab.com/curtgrimes/webcaptioner:master (if I was committing to the master branch in the webcaptioner project), and then push it to the Docker registry I’ve enabled in GitLab for this project:

Screenshot of GitLab deployment options

Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:
    - source scripts/deploy.sh

Production:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: production
    url: https://webcaptioner.com
  script:
    - source scripts/deploy.sh
  when: manual

These next two sections define two deploy stage jobs: one for deploying to a staging environment and one for deploying to my production environment. They run a deploy.sh script that I’ll get to in a minute. On the Production job, when: manual prevents this job from running automatically — but I can start it on any commit if I wish. GitLab has more information about the when keyword in CI configurations.

deploy.sh

I’ve added these secret variables in GitLab under Settings > CI / CD > Secret variables:

Screenshot of GitLab secret variables

Learn more about getting your AWS access key and AWS secret access key that will let you use the AWS command line interface. I’d suggest installing the command line interface and playing around with it to make sure you can get it connected to your AWS environment outside of GitLab CI scripts. For convenience, I’ve also put AWS_REGION in here as a variable. (As of this writing, Fargate is only available in the US East (N. Virginia) region.)
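
For example, a couple of quick commands will confirm the CLI can talk to your account before you rely on it inside CI (assuming the CLI is configured with the same access key and secret):

# Confirms the credentials are valid and shows which account and user they map to.
aws sts get-caller-identity

# Should list your ECS clusters and ECR repositories once the stack exists.
aws ecs list-clusters --region us-east-1
aws ecr describe-repositories --region us-east-1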

Here is the deploy.sh script run by both the Staging and Production jobs:

deploy.sh

# Install AWS Command Line Interface
# https://aws.amazon.com/cli/
apk add --update python python-dev py-pip
pip install awscli --upgrade

docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

# Set AWS config variables used during the AWS get-login command below
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY

# Log into AWS docker registry
# The `aws ecr get-login` command returns a `docker login` command with
# the credentials necessary for logging into the AWS Elastic Container Registry
# made available with the AWS access key and AWS secret access keys above.
# The command returns an extra carriage return character at the end that needs to be stripped out.
$(aws ecr get-login --no-include-email --region $AWS_REGION | tr -d '\r')

# Push the updated Docker container to the AWS registry.
# Using the $CI_ENVIRONMENT_SLUG variable provided by GitLab, we can use this same script
# for all of our environments (production and staging). This variable equals the environment
# name defined for this job in gitlab-ci.yml.
docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME $AWS_REGISTRY_IMAGE:$CI_ENVIRONMENT_SLUG
docker push $AWS_REGISTRY_IMAGE:$CI_ENVIRONMENT_SLUG

# The AWS registry now has our new container, but our cluster isn't aware that a new version
# of the container is available. We need to create an updated task definition. Task definitions
# always have a version number. When we register a task definition using a name that already
# exists, AWS automatically increments the previously used version number for the task
# definition with that same name and uses it here. Note that we also define CPU and memory
# requirements here and give it a JSON file describing our task definition that I've saved
# to my repository in an aws/ directory.
aws ecs register-task-definition --family webcaptioner-$CI_ENVIRONMENT_SLUG --requires-compatibilities FARGATE --cpu 256 --memory 512 --cli-input-json file://aws/webcaptioner-task-definition-$CI_ENVIRONMENT_SLUG.json --region $AWS_REGION

# Tell our service to use the latest version of task definition.
aws ecs update-service --cluster webcaptioner-$CI_ENVIRONMENT_SLUG --service webcaptioner --task-definition webcaptioner-$CI_ENVIRONMENT_SLUG --region $AWS_REGION

You’ll notice multiple uses of $CI_ENVIRONMENT_SLUG above. In our Staging environment, this value becomes “staging”. In Production, it is set to “production”. Again, these values come from the environments defined in gitlab-ci.yml:

Staging:
  stage: deploy
  services:
    - docker:dind
  environment:
    name: staging
    url: https://staging.webcaptioner.com
  script:

So when we run aws ecs update-service --cluster webcaptioner-$CI_ENVIRONMENT_SLUG in deploy.sh above, we’re updating the service of the cluster named webcaptioner-staging or webcaptioner-production.
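
For example, with the production environment selected (and AWS_REGION set to us-east-1), the last command in deploy.sh expands to:

# $CI_ENVIRONMENT_SLUG -> "production", $AWS_REGION -> "us-east-1"
aws ecs update-service --cluster webcaptioner-production --service webcaptioner --task-definition webcaptioner-production --region us-east-1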

Task Definition

deploy.sh above uses a task definition JSON file stored in our code repository to register a new revision of the task definition every time our container image updates. Even if the definition itself hasn’t changed, we still need to register a new revision so that we can tell our service (which can run one or more concurrent tasks based on the task definition) that a new revision of the task definition exists.

Since this task definition has some references specific to your stack (like a staging or production stack), you’ll need a separate task definition file for each environment. Here is my production task definition:

webcaptioner-task-definition-production.json

{
  "volumes": [],
  "family": "webcaptioner",
  "executionRoleArn": "arn:aws:iam::096015811855:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/webcaptioner",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "portMappings": [
        {
          "hostPort": 8080,
          "protocol": "tcp",
          "containerPort": 8080
        }
      ],
      "cpu": 0,
      "memoryReservation": 300,
      "volumesFrom": [],
      "image": "096015811855.dkr.ecr.us-east-1.amazonaws.com/webcaptioner:production",
      "name": "webcaptioner",
      "environment": [
        {
          "name": "HUGO_BASE_URL",
          "value": "https://webcaptioner.com"
        }
      ]
    }
  ]
}

Some things to note here: The image value has the path to my AWS container registry and says to use the image tagged production (the tag applied when deploy.sh pushed the image to ECR with docker push). The environment array lets you define environment variables for your Docker container. In my case, I’m setting a configuration value for Hugo, the static site generator I use to create this blog and Web Captioner’s Help Center. Also note that 8080 is the port that my container exposes itself on.
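
If you want to confirm that a new revision was registered after a deploy, one way (using the family name from my production environment) is:

# Without a revision number, this returns the latest ACTIVE revision of the family.
aws ecs describe-task-definition \
  --task-definition webcaptioner-production \
  --query 'taskDefinition.revision' \
  --region us-east-1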

Running it all

In GitLab, you’ll be able to watch the job’s progress and see output similar to this:

GitLab deploy log

After the deploy is done, the cluster running in AWS will replace its running tasks with the new version. It will take a few minutes for it to switch over and for the old tasks to be stopped automatically (depending on how you have your service and autoscaling configured in your cluster). I have my service set up to always keep a minimum of two tasks running, and you can see here that the old version (7) of the task keeps running while the new version (8) is starting up.

AWS cluster tasks pending

When your new tasks become available, the cluster immediately begins draining connections to the old version of the tasks in preparation for removing them. The Events tab will show you the status of this:

AWS cluster events tab
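
You can also watch the rollout from the command line. Here’s a small sketch using my production cluster and service names (substitute your own):

# Shows running/pending task counts and the most recent service events.
aws ecs describe-services \
  --cluster webcaptioner-production \
  --services webcaptioner \
  --region us-east-1 \
  --query 'services[0].{running:runningCount,pending:pendingCount,events:events[0:3].message}'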

If all has gone well, you should be able to access your application using the DNS name given to your load balancer:

AWS load balancer DNS

Running Web Captioner via the ELB DNS


(The HTTPS warning here is due to the fact that my application redirects all HTTP requests to HTTPS and the DNS name provided by the ELB does not have a certificate registered. But in production you wouldn’t distribute the ELB name — you would use your own domain name and create an A (alias) record that points to your load balancer.)
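
If your DNS is hosted in Route 53, a hedged sketch of that last step looks something like this (the hosted zone IDs and the load balancer DNS name below are placeholders; the load balancer’s canonical hosted zone ID comes from aws elbv2 describe-load-balancers):

# Create or update an alias A record pointing the domain at the load balancer.
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_ROUTE53_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "webcaptioner.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ALB_CANONICAL_HOSTED_ZONE_ID",
          "DNSName": "your-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'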

Questions, Support, and Comments

Got questions about how I’m deploying from GitLab to AWS? Feel free to comment below or message me.

For questions about Web Captioner, the Help Center answers some commonly asked questions and the Web Captioner Users Group on Facebook is a great place to get help from the Web Captioner community. Like Web Captioner on Facebook to be notified of new updates and upcoming features. If you’ve got an idea for something you’d like to see Web Captioner do, let’s hear about it!
