How to Deploy Elixir Releases with Ansible

Prakash Venkatraman

In my last post, I described how to generate a platform-specific Elixir release. Now, the only thing left to do is to put it on the world wide web.

To follow along with this post, you’ll need a few things:

  1. An IP address for a remote machine (preferably running Linux) you want to deploy your application to.
  2. An RSA keypair, with the public key placed on that remote machine. Read more about how to do this here, or see the sketch after this list.
  3. Ansible (on your local machine). Read more about Ansible here.
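
For step 2, here's a minimal sketch of generating a keypair and copying the public key to the remote machine (assuming OpenSSH on both ends; substitute your own username and host):


$ ssh-keygen -t rsa -b 4096    # generate an RSA keypair under ~/.ssh
$ ssh-copy-id username@host_ip # append your public key to the remote authorized_keys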

The Plan

This post is going to cover a non-zero-downtime deployment. That means that during its course, your app will be offline for a brief period (as we deploy and restart). The BEAM allows for hot code upgrades, but we aren't going to cover that functionality in this post. Ansible also has ways to do rolling updates, but they require infrastructure that is more complex than the single machine used in this tutorial.

That being said, to consider our deployment automation a success, it needs to be able to do the following:

  1. Copy our local release artifact to a remote machine.
  2. Deploy any auxiliary services (like a database instance) and make them accessible to our release application.
  3. Ensure that our deployment is idempotent, because we'll be using it to ship code to the box over and over again.

Simple is best?

The easiest way to accomplish this is plain ol' SSH. SCP is a time-tested tool for uploading files through a secure tunnel. Assuming you've placed the public key of your RSA keypair on your remote machine, you should be able to do the following:


$ scp -r local/path/to/release/folder username@host_ip:remote/path/to/release/folder
$ ssh username@host_ip
<… authenticate …>
$ remote/path/to/release/folder daemon

These commands take care of goal 1, but they ignore 2 and 3 completely. Our app does not have a database available to it. There also isn’t much we can re-use — to redeploy, we’ll need to SSH back into the box, stop the application, delete the release, and then re-copy everything to the box from our local machine.

While simple is good, too simple can be a headache. To follow the plan, we’ll need to automate the configuration of the remote machine itself.

Configuration Management

To ensure that our application behaves the way we expect it to, we need to control the environment it runs in. We’ll need to build it from scratch, provision and seed a database, copy the release, run it, and test for uptime. Each one of those commands needs to be idempotent because when the time comes to ship a new build, we’ll have to tear everything down and do it all again.

Enter, Ansible

Ansible is a tool we can use to replicate a working environment on our remote machine. Named after the sci-fi device from Ursula K. Le Guin's novels, it wraps SSH commands in abstractions called “modules” so that we can run complex commands remotely and programmatically.

A Brief Intro to Ansible

The ansible binary runs commands on remote machines through the use of composable pieces of software called modules. The composition itself happens in a YAML file called a playbook. You can see samples of the playbooks I use in this tutorial in the associated repo. But as a brief introduction, this is what a playbook could look like:


- hosts: all               # 1
  remote_user: user_name   # 2
  tasks:
    - name: <task name>    # 3
      module_name:         # 4
        module_arg1: "foo" # 5
      register: my_output  # 6

Here’s what’s going on:

  1. This block tells Ansible which hosts to run the playbook against. Our tutorial doesn’t cover managing different tasks for different machines, so you can ignore this for now.
  2. This is the user Ansible assumes when it runs the tasks. The playbooks I use assume the root user, but take care to create another user if you want to observe certain permissions on your target host.
  3. This is the name that Ansible will use to refer to your task. This is not the name of the module.
  4. This is the actual name of the module. All of the modules used in this tutorial can be found here.
  5. Arguments for the module can be passed in as a key-value list.
  6. Finally, use the register keyword to assign the output of the task to a variable, so it can be referenced later (see the snippet after this list).
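
For example, a later (hypothetical) task could print the registered value or branch on it:


- name: act on the registered output
  debug:
    var: my_output
  when: my_output is defined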

Your playbooks can be placed anywhere in your project. The only time the path to them is important is when you use ansible-playbook to run them (covered below).

Let’s get started.

System setup

Add the IP of your target machine to the /etc/ansible/hosts file, for example (a minimal sketch; the group name and address are placeholders):
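

# /etc/ansible/hosts
[webservers]
203.0.113.10 ansible_user=root

This tutorial assumes your target machine is running a Debian Linux distribution, so the following sections will reference the Debian package manager, APT. If your target machine is non-Linux, you can still use Ansible, but you might need to search for a third-party module that can install packages or write one yourself. But assuming you are working with a Debian machine, let's use the apt module to set up our machine: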


# system-setup.yml
- name: install system packages
  apt:
    update_cache: yes
    state: present
    name:
      - gcc
      - g++
      - curl
      - wget
      - unzip
      - git
      - python-dev
      - python-apt
      - make
      - automake
      - autoconf
      - libreadline-dev
      - libncurses-dev
      - libssl-dev
      - libyaml-dev
      - libxslt-dev
      - libffi-dev
      - libtool
      - unixodbc-dev

Let’s also install pip, the Python package manager, since we’ll need it for the next step:


# system-setup.yml
- name: install pip
  apt:
    update_cache: yes
    state: present
    name: python-pip

The Postgres Instance

Now that our machine has been provisioned with the basics, we need to install Postgres. The apt module will, again, do nicely:


# postgres.yml
- name: install postgres + postgres packages
  apt:
    update_cache: yes
    state: present
    name:
      - postgresql
      - postgresql-contrib
      - libpq-dev

(You can include these Postgres dependencies as a part of the first apt call if you want, but I chose to separate them because I found it easier to read.)

Now, to interact with our Postgres instance, we'll need a driver. Since Ansible is written in Python, it works best with Python libraries. Let's use the psycopg2 package and install it with pip:


# postgres.yml
- name: install psycopg2
  pip:
    name: psycopg2

Credentials

Next, we’ll need to securely include sensitive credentials (in this case, our database username and password).

To do that, we’re going to use a plugin called lookup. To use the looked-up value later on, we need to store it and make it available to the rest of our pipeline. Ansible calls these stored values “facts”; to create one, use the set_fact module:


# postgres-facts.yml
- when: "database_name is not defined"
  name: "compute database name"
  set_fact:
    database_name: "{{ lookup('env', 'DATABASE_NAME') }}"
- name: set database host
  set_fact:
    database_host: "{{ lookup('env', 'DATABASE_HOST') }}"
- name: create or get postgres password
  set_fact:
    database_password: "{{ lookup('env', 'DATABASE_PASSWORD') }}"
- name: set database user
  set_fact:
    database_user: "{{ lookup('env', 'DATABASE_USER') }}"

(We’re looking for values in environment variables, so make sure they are set on your local machine.)

N.B.

This approach could get annoying if you plan on deploying from more than one machine (since you might not always have the same environment). Ansible Vault is an alternative, but it is unfortunately out of this post’s scope.
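
If you want to explore Vault anyway, the basic workflow looks something like this (the filenames are just examples):


$ ansible-vault create secrets.yml           # opens an editor and writes an encrypted file
$ ansible-playbook site.yml --ask-vault-pass # prompts for the vault password at runtime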

User and Database

Now that we have an instance and credentials, we can create a user and associate it with an actual database.

Two modules will come in handy here, postgresql_user and postgresql_db:


# postgres.yml
- name: create postgres user
  postgresql_user:
    name: "{{ database_user }}"         # 1
    password: "{{ database_password }}" # 1
    role_attr_flags: CREATEDB,SUPERUSER # 2
    state: present
  become_user: postgres # 3
  become: yes           # 4
- name: create database
  postgresql_db:
    name: "{{ database_name }}" # 1
    encoding: "UTF-8"
  become_user: postgres # 3
  become: yes           # 4

Let’s break down what’s going on here. The postgresql_user task is doing a few things:

  1. Specifying credentials (these keys were set as facts in the previous step).
  2. Assigning role attributes to the new user (in this case, SUPERUSER and CREATEDB).
  3. Declaring a user on the target machine (every Postgres instance comes with a default “postgres” user) to assume and
  4. Assuming that user’s privileges.

Next, we actually create the database, using #3 and #4 from above. Together, these two tasks allow us to access the database from our application, provided we use the right username and password.
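
On the application side, the release has to read those same credentials at runtime. Here is a minimal sketch, assuming an Elixir 1.9+ release with an OTP app named :api and a config/releases.exs (your app and repo names will differ):


# config/releases.exs
import Config

config :api, Api.Repo,
  username: System.get_env("DATABASE_USER"),
  password: System.get_env("DATABASE_PASSWORD"),
  database: System.get_env("DATABASE_NAME"),
  hostname: System.get_env("DATABASE_HOST")

Note that these environment variables need to be set on the remote machine when the release boots, not just on the machine running Ansible.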

A note about AWS

If you are using an AWS EC2 instance to host your database, you may want to provision a permanent data store like EBS. An instance’s ephemeral storage is wiped each time the instance is stopped, and it could stop and restart at any time.

Deployment

Next, we need to move our artifact to our box:


# deploy-release.yml
# 1
- name: check to see if release archive exists locally
  stat:
    path: "{{ release_archive_path }}"
  register: release_stat
  delegate_to: 127.0.0.1
# 2
- name: fail if no local release
  fail:
    msg: "Local release tarball not found. Copy it to {{ release_archive_path }}."
  when: not release_stat.stat.exists
# 3
- name: clean remote release directory
  file:
    path: "{{ remote_release_dir }}"
    state: absent
- name: create remote release directory
  file:
    path: "{{ remote_release_dir }}"
    state: directory
# 4
- name: unarchive release on remote server
  unarchive:
    src: "{{ release_archive_path }}"
    dest: "{{ remote_release_dir }}"
# 5
- name: check to see if release artifact exists remotely
  stat:
    path: "{{ remote_release_artifact_path }}"
  register: remote_release_artifact_stat
# 6
- name: echo end
  debug:
    var: remote_release_artifact_stat.stat.exists

Here’s the breakdown:

  1. We check to see if the release was properly created locally, and store its state in a variable.
  2. If the above check fails, stop everything.
  3. If it passes, clean out the remote release directory and recreate it.
  4. Copy and unarchive the release, and then
  5. Check to see that it was properly copied.
  6. Finally, we echo the status of the remote release artifact.

If Step 6 passes, you’ve successfully deployed your app! Now we can do one of two things: apply a migration, or start it up.

Optional: Migrations

Once we have the database up and running and an application artifact to play with, we have the option of applying migrations. Have a look at this playbook:


# run-migrations.yml
# 1
- name: check if postgres is running
  command: "/etc/init.d/postgresql status"
  register: postgres_status
  ignore_errors: yes # let the next task report the failure instead of halting here
# 2
- fail:
    msg: "Postgres is not running"
  when: postgres_status.stderr != "" or postgres_status.failed
# 3
- name: check to see if release artifact exists remotely
  stat:
    path: "{{ remote_release_artifact_path }}"
  register: remote_release_artifact_stat
# 4
- fail:
    msg: "No remote release artifact"
  when: not remote_release_artifact_stat.stat.exists
# 5
- name: run migrations on remote server
  command: "{{ remote_release_artifact_path }} eval 'ReleaseTasks.migrate'"
  when: remote_release_artifact_stat.stat.exists

Here’s what’s going on:

  1. First off, we check if Postgres is running, and store its “up” status in a variable.
  2. If postgres is not running, we fail out.
  3. If it is, we check to see if the release artifact exists.
  4. Fail if the artifact does not exist for some reason.
  5. If it does exist, we run a command. This command references a module that was packaged into my release. The ReleaseTasks module is the following:


# release_tasks.ex
defmodule ReleaseTasks do
  def migrate do
    # Start the app and its dependencies (replace :api with your OTP app name)
    {:ok, _} = Application.ensure_all_started(:api)

    Ecto.Migrator.run(
      Api.Repo,
      Application.app_dir(:api, "priv/repo/migrations"),
      :up,
      all: true
    )

    # Stop the VM once the migrations have run
    :init.stop()
  end
end

Keep in mind this snippet assumes the use of Ecto. If you aren’t using Ecto, feel free to replace this code with another script that runs your migrations.

Startup

Once our database, artifact, and possible migrations are good to go, we can start our application on our box:


# up.yml
# 1
- name: check to see if release artifact exists remotely
  stat:
    path: "{{ remote_release_artifact_path }}"
  register: remote_release_artifact_stat
# 2
- name: start remote server
  command: "{{ remote_release_artifact_path }} daemon"
  when: remote_release_artifact_stat.stat.exists
  register: start_cmd
# 3
- name: echo end
  debug:
    var: start_cmd

Here’s what’s happening. Do you notice a pattern?

  1. We check to make sure the release artifact exists.
  2. We start it as a background process and register the output as a variable.
  3. Then, dump the output to stdout.

Admittedly this last part is not as elegant as I’d like it to be, but it is a good way of visualizing what’s going on as the box is running your program. If you have suggestions on how to improve this, feel free to email me.

Now, if you get to Step 3 and see a successful output in your console, your application is officially running on the internet!

Teardown

At the beginning of this post, I mentioned that our deployment would have some downtime. Here is where that downtime comes into play. Should you ever need to redeploy (and you will), you’ll first need to stop your application. The teardown could look like this:


# down.yml
# 1
- name: check to see if release artifact exists remotely
  stat:
    path: "{{ remote_release_artifact_path }}"
  register: remote_release_artifact_stat
# 2
- name: stop remote server
  command: "{{ remote_release_artifact_path }} stop"
  when: remote_release_artifact_stat.stat.exists
  register: stop_cmd
# 3
- name: clean remote release directory
  file:
    path: "{{ remote_release_dir }}"
    state: absent
# 4
- name: echo end
  debug:
    var: stop_cmd

Classic breakdown:

  1. We check to see if the release artifact exists on our box.
  2. If so, we run the stop command on the release and store its output into a variable.
  3. We clean out the release directory.
  4. We dump the stop command output to stdout.

If Step 4 succeeds, you’ve torn everything down and made the machine ready for a future deployment.

If you’re looking for a zero-downtime deployment, shoot me an email and I’ll do some digging into how to tweak Ansible to fit your needs. You can also look here.

Putting it All Together

The Facts

At this point, you should have six playbooks:

  1. System Setup
  2. Database Creation
  3. Deployment
  4. Migrations
  5. Startup
  6. Teardown

A lot of these rely on the same system facts, namely:

  • release artifact directory
  • release artifact path

We can put all of these facts into a file that will be available to every module:


# project-facts.yml
- name: set app name
  set_fact:
    app_name: api
- name: set app version
  set_fact:
    app_version: "0.1.0"
- name: set credentials directory path
  set_fact:
    credentials_dir: "~/credentials/"
- name: set release name
  set_fact:
    release_name: "{{ app_name }}-{{ app_version }}"
- name: set release directory name
  set_fact:
    release_dir: "../rel/artifacts/"
- name: set release archive path
  set_fact:
    release_archive_path: "{{ release_dir }}{{ release_name }}.tar.gz"
- name: set remote release directory
  set_fact:
    remote_release_dir: "~/rel/artifacts/"
- name: set remote release archive path
  set_fact:
    remote_release_archive_path: "{{ remote_release_dir }}{{ release_name }}.tar.gz"
- name: set remote release artifact path
  set_fact:
    remote_release_artifact_path: "{{ remote_release_dir }}opt/build/_build/prod/rel/api/bin/api"

You can reference this file before any playbook that needs to access global project facts.

Directory Structure

Now that you have your facts files, you can simplify the rest of your playbooks into the following structure:


~/project/deploy/
├── facts/
│   ├── project-facts.yml
│   └── postgres-facts.yml
├── tasks/
│   ├── system-setup.yml
│   ├── postgres.yml
│   ├── deploy-release.yml
│   ├── run-migrations.yml
│   ├── up.yml
│   └── down.yml
├── create-db.yml
├── deploy.yml
├── migrations.yml
├── startup.yml
└── teardown.yml
Each of the playbooks in the deploy/ directory references a facts file as well as a task. It might take looking at actual code for this to gel. Feel free to browse the repo to see what the files themselves look like.
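
As a rough sketch, a top-level playbook like deploy.yml might just wire a facts file to a task file (assuming Ansible 2.4+ for import_tasks; older versions use include):


# deploy.yml
- hosts: all
  remote_user: root
  tasks:
    - import_tasks: facts/project-facts.yml
    - import_tasks: tasks/deploy-release.yml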

Mix Aliases: Mask the Ugliness

Each time you deploy, your workflow will likely look something like this:

  1. Deploy
  2. Run Migrations
  3. Start up

Ansible ships with a tool called ansible-playbook that you can use to run each of these steps individually:


$ ansible-playbook deploy/deploy.yml
$ ansible-playbook deploy/migrations.yml
$ ansible-playbook deploy/startup.yml

But… that’s a lot of typing, isn’t it? Why not hide the long commands behind a mix alias?

Create a shell script for each playbook you want to alias (the mix.exs below also references a down.sh for teardown, which follows the same pattern). For example:


#! /usr/bin/env bash
ansible-playbook deploy/deploy.yml

and


#! /usr/bin/env bash
ansible-playbook deploy/startup.yml

Make them executable with chmod +x:
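

$ chmod +x deploy.sh up.sh # plus down.sh, if you created one for teardown

Then, add this to your mix.exs file: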


# mix.exs
defp aliases do
  [
    deploy: ["cmd ./path/to/deploy.sh"],
    up: ["cmd ./path/to/up.sh"],
    down: ["cmd ./path/to/down.sh"]
  ]
end

Once you have those aliases, deploying your app is as simple as:


$ mix deploy
$ mix up
# For tear down…
$ mix down

Lo, and behold, with two commands, you have made your app available to the world!

Thanks for reading. Peruse the code behind this post here, and feel free to email me with questions/suggestions at prakash@carbonfive.com.

Module Glossary

Here’s a quick reference for all the modules (and the one plugin) we used:

  • apt: installs system packages with the Debian package manager.
  • pip: installs Python packages.
  • postgresql_user: creates a Postgres user and assigns role attributes.
  • postgresql_db: creates a Postgres database.
  • set_fact: stores a value as a fact for later tasks.
  • stat: checks a path and reports its state.
  • file: creates or removes files and directories.
  • unarchive: copies an archive to the remote machine and unpacks it.
  • command: runs an arbitrary command.
  • debug: prints a variable or message.
  • fail: aborts the run with a message.
  • lookup (plugin): reads values from, for example, environment variables.

Illustration by Nicole Thayer.

