Deploying Elixir to AWS Elastic Beanstalk with Docker

This is my experience deploying an Elixir Phoenix app on Elastic Beanstalk with Docker. There are still plenty of areas for investigation and improvement, some of which I note at the end. I will do my best to isolate each part of this post into sections, such that if you’re doing Docker and Elastic Beanstalk, or Docker and Phoenix, you can focus only on the information you need. There are some thoughts towards the end of the post about why we chose this deployment toolchain.

Docker

Dockerfile

I will walk through the steps for setting up Docker for Phoenix based on our project. In your application directory, create a Dockerfile.

# Set the Docker image you want to base your image off.
# I chose this one because it has Elixir preinstalled.
FROM trenpixster/elixir:1.3.0

# Set up Node - Phoenix uses the Node library `brunch` to compile assets.
# The official node instructions want you to pipe a script from the
# internet through sudo. There are alternatives:
# https://www.joyent.com/blog/installing-node-and-npm
RUN curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash - && apt-get install -y nodejs

# Install other stable dependencies that don't change often

# Compile app
RUN mkdir /app
WORKDIR /app

# Install Elixir Deps
ADD mix.* ./
RUN MIX_ENV=prod mix local.rebar --force
RUN MIX_ENV=prod mix local.hex --force
RUN MIX_ENV=prod mix deps.get

# Install Node Deps
ADD package.json ./
RUN npm install

# Install app
ADD . .
RUN MIX_ENV=prod mix compile

# Compile assets
RUN NODE_ENV=production node_modules/brunch/bin/brunch build --production
RUN MIX_ENV=prod mix phoenix.digest

# Exposes this port from the docker container to the host machine
EXPOSE 4000

# The command to run when this image starts up
CMD MIX_ENV=prod mix ecto.migrate && \
  MIX_ENV=prod mix phoenix.server

Docker caches each command in a layer and reruns them only when needed. Therefore, by moving the more stable instructions higher up we can cache some of the more expensive operations like installing Node or Elixir dependencies. The application code changes more frequently and so should be towards the end.

.dockerignore

Add a .dockerignore file into the root of your project. Any files that match the patterns in it will be ignored by the Docker ADD command. This is a good place to put any local development artifacts. Here is a sample.

/deps
/_build
erl_crash.dump
/node_modules
/priv/static/*
/uploads/files/*
.git
.gitignore

docker build

Follow the official Docker installation instructions. I used the new Docker for Mac, which worked great.

Build your image by running docker build -t project_name . in the root of your project that contains the Dockerfile. Replace the placeholder project_name with something meaningful to your project. It will download the base image and then apply each command in your Dockerfile. Once the build completes successfully, if you rerun the command it will run super fast because of the Docker caching. Running docker images will print out the built images on your machine.

REPOSITORY      TAG     IMAGE ID        CREATED         SIZE
project_name    latest  b85865f180ad    5 minutes ago   1.554 GB

docker run

To start up a new container using your image run the following.

docker run -p 4000:4000 --rm \
  --name project_name_development -i -t project_name

The -p 4000:4000 option publishes the port we exposed in our Dockerfile on the same port of the host machine. The --rm option removes the container when it exits, which was useful while I was developing because the container name is freed up for the next run.

Now your application is either running, or it fell over because it was missing some environment variables required to start. There are lots of ways to inject environment variables into a Docker container. The next section is about Elastic Beanstalk. I’ll be using the Elastic Beanstalk CLI to manage our environment variables. If you’re not using Elastic Beanstalk, you can look up the -e and --env-file options for docker run.
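If you do want to stick with plain docker run, here is a minimal sketch of both options; the variable values and the production.env file name are just placeholders for whatever your application expects:

# Pass individual variables with -e
docker run -p 4000:4000 --rm \
  -e MIX_ENV=prod -e PORT=4000 -e SECRET_KEY_BASE=prettyprettygood \
  --name project_name_development -i -t project_name

# Or collect KEY=value pairs, one per line, in a file and pass it with --env-file
docker run -p 4000:4000 --rm --env-file ./production.env \
  --name project_name_development -i -t project_name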

Elastic Beanstalk

eb init

Elastic Beanstalk is a Heroku-like PaaS from Amazon. Follow the official Elastic Beanstalk CLI installation instructions. Once installed, configure the tool by running eb init, where you select an AWS region and enter your access key ID and secret access key. When you’re asked to select a platform, choose Docker. Enable the SSH option and create a key. Your project should now be configured.
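eb init is interactive by default, but it also accepts most of the answers as flags, which is handy for scripting. A sketch, assuming your application is called project_name and treating the region and key pair names as placeholders:

eb init project_name --platform docker --region eu-west-1 --keyname project_name_eb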

eb local

As I mentioned above, there are lots of ways to manage your environment and environment variables. The eb local command lets you manage your Docker application locally using the same interface you would use to manage a remote eb environment. There are other tools out there like Docker Compose that help you manage an environment. I stuck with eb local rather than introducing a new tool.

To configure your local environment variables, adapt the following:

eb local setenv HOST=localhost PORT=4000

You can check that it worked by running eb local printenv.

Finally, let’s run the application with eb local run --port 4000 which will start up our Docker container with all of the environment variables set. It should start smoothly and be accessible at http://localhost:4000.
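As far as I can tell, eb local run also accepts a comma-separated --envvars option (mirroring eb create) if you need a one-off override without changing the stored values:

eb local run --port 4000 --envvars MIX_ENV=prod,PORT=4000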

eb create

The next step is to create an environment in Elastic Beanstalk. My Phoenix application required a database so I configured it with the CLI.

eb create \
  --database \
  -db.engine postgres \
  -db.i db.t2.small \
  -db.size 10 \
  -db.version 9.4.5 \
  --envvars MIX_ENV=prod,SECRET_KEY_BASE=prettyprettygood,PORT=4000

It’s best to create the database at this step so that its environment variables are injected before your application tries to compile and run. I ran into a strange situation where our application was failing because the required database parameters were missing. When I tried to add a database to the existing environment, the deploy failed and rolled back because the application image had been built before the database environment variables were injected. Configuring the database in the eb create command fixed that problem.
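Once the environment exists you can still add or change variables with eb setenv and inspect the current values with eb printenv; the HOST value below is only a placeholder for your real hostname:

eb setenv HOST=project-name.eu-west-1.elasticbeanstalk.com
eb printenv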

Dockerrun.aws.json

We need to add the mapping of the ports between the EC2 host and the Docker container.

{
  "AWSEBDockerrunVersion": 1,
  "volumes": [
    {
      "name": "elixir-app",
      "host": {
        "sourcePath": "/app"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "elixir-app",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 4000
        }
      ]
    }
  ]
}
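With the Dockerfile and Dockerrun.aws.json committed, pushing a new version of the application is a single command. By default the EB CLI packages the latest commit of your git repository, uploads it, and waits for the environment to update:

eb deploy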

SSH Keys

When you view the AWS Elastic Beanstalk dashboard and click on the configuration of your environment, you’ll notice that only a single key pair can be set for SSHing into the instances managed by Elastic Beanstalk. This is not ideal but something you should be aware of. You can read how others have worked around it on this StackOverflow thread.

Phoenix

config/prod.exs

I mentioned a number of quirks with environment variables above. Here is the config file that reads most of the environment variables and points correctly to the asset manifest file.

use Mix.Config

config :project_name, ProjectName.Endpoint,
  http: [port: {:system, "PORT"}, compress: true],
  url: [scheme: "http", host: System.get_env("HOST"), port: {:system, "PORT"}],
  secret_key_base: System.get_env("SECRET_KEY_BASE"),
  code_reloader: false,
  cache_static_manifest: "priv/static/manifest.json",
  server: true

config :project_name, ProjectName.Repo,
  adapter: Ecto.Adapters.Postgres,
  database: System.get_env("RDS_DB_NAME"),
  username: System.get_env("RDS_USERNAME"),
  password: System.get_env("RDS_PASSWORD"),
  hostname: System.get_env("RDS_HOSTNAME"),
  port: String.to_integer(System.get_env("RDS_PORT") || "5432"),
  pool_size: 20,
  ssl: true

config :logger, level: :info
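The SECRET_KEY_BASE referenced above can be generated with the task Phoenix provides and then set on the environment; the value below is just a placeholder for whatever the task prints:

mix phoenix.gen.secret
eb setenv SECRET_KEY_BASE=paste_the_generated_value_here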

Elastic Beanstalk and WebSockets

Our project uses Phoenix WebSockets, but this information is useful for any application using WebSockets on AWS. For WebSockets to work, you must ensure that the Elastic Load Balancer’s listeners are forwarding all TCP traffic and not just HTTP traffic on port 80.

You could edit the setting directly in the web console, but it is better to change it through your project’s configuration. To do this, run eb config, which loads the configuration of your environment from AWS into an editor. Search for aws:elb:listener and you should see some entries already. Update the ListenerProtocol and InstanceProtocol to TCP. Below is our modified config.

aws:elb:listener:80:
  InstancePort: '80'
  InstanceProtocol: TCP
  ListenerEnabled: 'true'
  ListenerProtocol: TCP
  PolicyNames: null
  SSLCertificateId: null

When you save and close the file, Elastic Beanstalk will update your environment to match the new settings.
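If you want to keep a record of these settings alongside the project, the EB CLI can also save an environment’s configuration as a named template under .elasticbeanstalk/saved_configs; the environment and template names here are placeholders:

eb config save project_name_env --cfg websockets-tcp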

Debugging

Inevitably, something will go wrong and you’ll want to gather more information. Here are the steps I went through as I tried to debug problems.

  1. Try eb logs
  2. Try eb ssh to get into the EC2 machine
  3. sudo -s on the EC2 machine to run Docker commands and attach to the instance
  4. docker ps will list the running containers
  5. docker exec -i -t container_name /bin/bash will connect you to the container in a Bash shell
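Putting the last few steps together, a typical debugging session on the EC2 host looks something like this; the container name is whatever docker ps reports, and docker logs is a quick way to see stdout without attaching:

sudo -s
docker ps
docker logs container_name
docker exec -i -t container_name /bin/bash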

Final Thoughts

Toolchain selection

The developers maintaining the project going forward had mentioned Ansible as their current tool of choice for automating deployments. Doing some due diligence, I found some Phoenix roles and began to experiment with an AWS deployment direct to EC2 instances.

The most popular role had a problem: it assumed it should copy root’s authorized_keys file. On EC2, credentials tend to be set up under another user such as ubuntu or ec2-user. This could not be easily remedied with a configuration option, so customisation would have been required, and I felt the result would have been confusing and harder to maintain. There was also a new major version of the role on the way that significantly changed how it worked.

Why Docker?

My next plan was Docker. Because of the unusual external dependencies of our project, we had had some issues getting new people up to speed on the project. Creating a Docker image seemed like a promising way to eliminate that headache for designers and developers alike. I came across trenpixster/elixir, which builds on top of phusion/baseimage. The customisations needed for the project would be clear to maintainers going forward. Building on Docker felt like a safe bet.

Why Elixir and Phoenix?

You can read about why thoughtbot loves Elixir here.

Next Steps

  1. Tools like Docker Compose or AWS CloudFormation seem like promising ways to automate the manual steps outlined in this blog post
  2. Reuse the Docker image on CI
  3. Configure WebSockets to work over SSL