How to Deploy Elixir Releases with Ansible

Prakash Venkatraman

In my last post, I described how to generate a platform-specific Elixir release. Now, the only thing left to do is to put it on the world wide web.

To follow along with this post, you’ll need a few things:

  1. An IP address for a remote machine (preferably running Linux) you want to deploy your application to.
  2. An RSA keypair, with the public key placed on that remote machine. Read more about how to do this here.
  3. Ansible (on your local machine). Read more about Ansible here.

The Plan

This post covers a deployment with downtime. That means that during its course, your app will be offline for a brief window (while we deploy and restart). The BEAM allows for hot code reloading, but we aren’t going to cover that functionality in this post. Ansible also has ways to do rolling updates, but they require infrastructure that is a bit more complex than the setup used in this tutorial.

That being said, to consider our deployment automation a success, it needs to be able to do the following:

  • Copy our local release artifact to a remote machine.
  • Deploy any auxiliary services (like a database instance) and make them accessible to our release application.
  • Ensure that our deployment is idempotent, because we’ll be using it to ship code to the box over and over again.

Simple is best?

One way to accomplish this, the easiest way, is through plain ol’ SSH. SCP is a time-tested tool that allows for uploading files through a secure tunnel. Assuming you’ve placed the public key of your RSA key-pair on your remote machine, you should be able to do the following:
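Something like the following, where the IP address, paths, and release name are all placeholders:

```shell
# Copy the release tarball to the remote box over SSH (paths are illustrative)
scp my_app.tar.gz root@203.0.113.10:/opt/my_app/

# Then SSH in, unpack the tarball, and start the release
ssh root@203.0.113.10 \
  "cd /opt/my_app && tar xzf my_app.tar.gz && bin/my_app daemon"
```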

These commands take care of goal 1, but they ignore 2 and 3 completely. Our app does not have a database available to it. There also isn’t much we can re-use — to redeploy, we’ll need to SSH back into the box, stop the application, delete the release, and then re-copy everything to the box from our local machine.

While simple is good, too simple can be a headache. To follow the plan, we’ll need to automate the configuration of the remote machine itself.

Configuration Management

To ensure that our application behaves the way we expect it to, we need to control the environment it runs in. We’ll need to build it from scratch, provision and seed a database, copy the release, run it, and test for uptime. Each one of those commands needs to be idempotent because when the time comes to ship a new build, we’ll have to tear everything down and do it all again.

Enter, Ansible

Ansible is a tool we can use to replicate a working environment on our remote machine. Named after the sci-fi device from Ursula K. Le Guin’s novels, it is a tool that wraps SSH commands in abstractions called “modules” so that we can run complex commands remotely and programmatically.

A Brief Intro to Ansible

The ansible binary (which ships when you install Ansible) runs commands on remote machines through the use of composable pieces of software called modules. The composition itself happens in a YAML file called a playbook. You can see samples of the playbooks I use in this tutorial in the associated repo. But as a brief introduction, this is what a playbook could look like:
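A sketch, where the host group, package, and variable names are placeholders (the numbered comments match the notes below):

```yaml
---
- hosts: all            # 1. which hosts to run this playbook against
  remote_user: root     # 2. the user Ansible assumes on the target machine
  tasks:
    - name: Install git           # 3. the human-readable task name
      apt:                        # 4. the module itself
        name: git                 # 5. module arguments, as a key-value list
        state: present
      register: git_install       # 6. store the task's output for later use
```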

Here’s what’s going on:

  1. This block tells Ansible which hosts to run the playbook against. Our tutorial doesn’t cover managing different tasks for different machines, so you can ignore this for now.
  2. This is the user Ansible assumes when it runs the task. The playbooks I use assume the root user, but take care to create another user if you want to observe certain permissions on your target host.
  3. This is the name that Ansible will use to refer to your task. This is not the name of the module.
  4. This is the actual name of the module. All of the modules used in this tutorial can be found here.
  5. Arguments for the module can be passed in as a key-value list.
  6. Finally, use the register keyword to assign the output of the task to a variable, so it can be referenced later.

Your playbooks can be placed anywhere in your project. The only time the path to them is important is when you use ansible-playbook to run them (covered below).

Let’s get started.

System setup

Add the IP of your target machine to the /etc/ansible/hosts file. This tutorial assumes your target machine is running a Debian Linux distribution, so the following sections will reference the Debian package manager APT. If your target machine is non-Linux, you can still use Ansible, but you might need to either search for a third party module that can install packages or write one yourself. But assuming you are working with a Debian machine, let’s use the apt module to set up our machine:
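For example (the package list here is illustrative; install whatever your release needs):

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Update the apt cache and install base packages
      apt:
        name:
          - build-essential
          - curl
          - unzip
        state: present
        update_cache: yes
```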

Let’s also install pip, the Python package manager, since we’ll need it for the next step:
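Assuming a Python 3 target, something like:

```yaml
    - name: Install pip
      apt:
        name: python3-pip
        state: present
```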

The Postgres Instance

Now that our machine has been provisioned with the basics, we need to install Postgres. The apt module will, again, do nicely:
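A sketch of the Postgres installation task:

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Install Postgres
      apt:
        name:
          - postgresql
          - postgresql-contrib
          - libpq-dev
        state: present
```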

(You can include these Postgres dependencies as a part of the first apt call if you want, but I chose to separate them because I found it easier to read.)

Now, to interact with our Postgres instance, we’ll need a driver. Since Ansible is written in Python, it works with Python libraries best. Let’s use the psycopg2 package and install it with pip:
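A task along these lines (psycopg2-binary is the precompiled variant; plain psycopg2 also works if build tools are present):

```yaml
    - name: Install the psycopg2 Postgres driver
      pip:
        name: psycopg2-binary
        state: present
```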


Next, we’ll need to securely include sensitive credentials (in this case, our database username and password).

To do that, we’re going to use a plugin called lookup. To use the looked-up value later on, we need to store it and make it available to the rest of our pipeline. Ansible calls these stored values “facts” — to create one, use the set_fact module:
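Something like this, where the environment variable names are placeholders:

```yaml
    - name: Read database credentials from local environment variables
      set_fact:
        db_user: "{{ lookup('env', 'MY_APP_DB_USER') }}"
        db_password: "{{ lookup('env', 'MY_APP_DB_PASSWORD') }}"
```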

(We’re looking for values in environment variables, so make sure they are set on your local machine).


This approach could get annoying if you plan on deploying from more than one machine (since you might not always have the same environment). Ansible Vault is an alternative, but, unfortunately, out of this post’s scope.

User and Database

Now that we have an instance and credentials, we can create a user and associate it with an actual database.

Two modules will come in handy here, postgresql_user and postgresql_db:
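A sketch, with hypothetical fact and database names (the numbered comments match the breakdown below):

```yaml
    - name: Create the application database user
      postgresql_user:
        name: "{{ db_user }}"                  # 1. credentials set as facts earlier
        password: "{{ db_password }}"
        role_attr_flags: SUPERUSER,CREATEDB    # 2. roles for the new user
      become: yes                              # 4. ...and assume its privileges
      become_user: postgres                    # 3. the default "postgres" user...

    - name: Create the application database
      postgresql_db:
        name: my_app_prod
        owner: "{{ db_user }}"
      become: yes
      become_user: postgres
```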

Let’s break down what’s going on here. The postgresql_user task is doing a few things:

  1. Specifying credentials (these keys were set as facts in the previous step).
  2. Assigning roles to the new user (in this case, the Superuser and Creation roles).
  3. Declaring a user on the target machine (every Postgres instance comes with a default “postgres” user) to assume and
  4. Assuming that user’s privileges.

Next, we actually create the database, using #3 and #4 from above. Together, these two tasks allow us to access the database from our application, provided we use the right username and password.

A note about AWS

If you are using an AWS EC2 instance to host your database, you may want to provision a persistent data store like an EBS volume. Instance store-backed EC2 instances lose all of their data each time they are stopped, and they can stop and restart at any time.


Next, we need to move our artifact to our box:
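A sketch of the deployment playbook, using hypothetical release_dir and release_artifact_path facts (the numbered comments match the breakdown below):

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Check that the release exists locally          # 1
      local_action:
        module: stat
        path: "{{ release_artifact_path }}"
      register: local_release

    - name: Fail if the release was never built            # 2
      fail:
        msg: "No release found at {{ release_artifact_path }}"
      when: not local_release.stat.exists

    - name: Clean out the old remote release               # 3
      file:
        path: "{{ release_dir }}"
        state: absent

    - name: Recreate the release directory                 # 3
      file:
        path: "{{ release_dir }}"
        state: directory

    - name: Copy and unarchive the release                 # 4
      unarchive:
        src: "{{ release_artifact_path }}"
        dest: "{{ release_dir }}"

    - name: Check that the release landed                  # 5
      stat:
        path: "{{ release_dir }}/bin"
      register: remote_release

    - name: Report the remote release status               # 6
      debug:
        var: remote_release.stat.exists
```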

Here’s the breakdown:

  1. We check to see if the release was properly created locally, and store its state in a variable.
  2. If the above check fails, stop everything.
  3. If it passes, clean out the remote release, and recreate the directory.
  4. Copy and unarchive the release, and then
  5. Check to see that it was properly copied.
  6. Finally, we echo the status of the remote release artifact.

If Step 6 passes, you’ve successfully deployed your app! Now we can do one of two things: apply a migration, or start it up.

Optional: Migrations

Once we have the database up and running and an application artifact to play with, we have the option of applying migrations. Have a look at this playbook:
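One possible shape, with my_app standing in for your release name and release_dir as a hypothetical fact (the numbered comments match the notes below):

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Check that Postgres is running            # 1
      command: pg_isready
      register: postgres_up
      ignore_errors: yes

    - name: Fail if Postgres is down                  # 2
      fail:
        msg: "Postgres is not running"
      when: postgres_up.rc != 0

    - name: Check that the release artifact exists    # 3
      stat:
        path: "{{ release_dir }}/bin/my_app"
      register: release_bin

    - name: Fail if the artifact is missing           # 4
      fail:
        msg: "No release found on the box"
      when: not release_bin.stat.exists

    - name: Run migrations                            # 5
      command: "{{ release_dir }}/bin/my_app eval 'ReleaseTasks.migrate()'"
```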

Here’s what’s going on:

  1. First off, we check if Postgres is running, and store its “up” status in a variable.
  2. If Postgres is not running, we fail out.
  3. If it is, we check to see if the release artifact exists.
  4. Fail if the artifact does not exist for some reason.
  5. If it does exist, we run a command. This command references a module that was packaged into my release. The ReleaseTasks module is the following:
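A minimal version, assuming an application named :my_app with repos configured under :ecto_repos (both names hypothetical):

```elixir
defmodule ReleaseTasks do
  @app :my_app

  # Load the application and run all pending Ecto migrations
  # for every repo configured under :ecto_repos.
  def migrate do
    Application.load(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end
```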

Keep in mind this snippet assumes the use of Ecto. If you aren’t using Ecto, feel free to replace this code with another script that runs your migrations.


Once our database, artifact, and possible migrations are good to go, we can start our application on our box:
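A sketch, again with my_app and release_dir as placeholders, assuming an Elixir 1.9+ mix release (which provides the daemon command):

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Check that the release artifact exists    # 1
      stat:
        path: "{{ release_dir }}/bin/my_app"
      register: release_bin

    - name: Fail if there is nothing to start
      fail:
        msg: "No release found on the box"
      when: not release_bin.stat.exists

    - name: Start the release in the background       # 2
      command: "{{ release_dir }}/bin/my_app daemon"
      register: start_output

    - name: Dump the startup output                   # 3
      debug:
        var: start_output.stdout
```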

Here’s what’s happening. Do you notice a pattern?

  1. We check to make sure the release artifact exists.
  2. We start it as a background process and register the output as a variable.
  3. Then, dump the output to stdout.

Admittedly this last part is not as elegant as I’d like it to be, but it is a good way of visualizing what’s going on as the box is running your program. If you have suggestions on how to improve this, feel free to email me.

Now, if you get to Step 3 and see a successful output in your console, your application is officially running on the internet!


At the beginning of this post, I mentioned that our deployment would have some downtime. Here is where that downtime comes into play. Should you ever need to redeploy (and you will), you’ll first need to stop your application. The teardown could be as follows:
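One way to write it, with the same placeholder names as before (the numbered comments match the breakdown below):

```yaml
- hosts: all
  remote_user: root
  tasks:
    - name: Check that the release exists on the box   # 1
      stat:
        path: "{{ release_dir }}/bin/my_app"
      register: release_bin

    - name: Stop the running release                   # 2
      command: "{{ release_dir }}/bin/my_app stop"
      register: stop_output
      when: release_bin.stat.exists

    - name: Clean out the release directory            # 3
      file:
        path: "{{ release_dir }}"
        state: absent

    - name: Dump the stop command output               # 4
      debug:
        var: stop_output.stdout
```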

Classic breakdown:

  1. We check to see if the release artifact exists on our box.
  2. If so, we run the stop command on the release and store its output into a variable.
  3. We clean out the release directory.
  4. We dump the stop command output to stdout.

If Step 4 is successful, you’ve successfully torn everything down and made the machine ready for a future deployment.

If you’re looking for a zero-downtime deployment, shoot me an email and I’ll do some digging around how to tweak Ansible to fit your needs. You can also look here.

Putting it All Together

The Facts

At this point, you should have six playbooks:

  1. System Setup
  2. Database Creation
  3. Deployment
  4. Migrations
  5. Startup
  6. Teardown

A lot of these rely on the same system facts, namely:

  • release artifact directory
  • release artifact path

We can put all of these facts into a file that will be available to every module:
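For example, a vars file that every playbook can pull in with vars_files (the file name, paths, and values are illustrative):

```yaml
# deploy/facts.yml: shared facts for every playbook
release_dir: /opt/my_app
release_artifact_path: _build/prod/my_app.tar.gz
```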

You can reference this file before any playbook that needs to access global project facts.

Directory Structure

Now that you have your facts files, you can simplify the rest of your playbooks into the following structure:
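For example (file names are placeholders; any layout works as long as the paths line up):

```
deploy/
├── facts.yml
├── setup.yml
├── database.yml
├── deploy.yml
├── migrate.yml
├── start.yml
└── teardown.yml
```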

Each of the playbooks in the deploy/ directory references a facts file as well as a task. It might take looking at actual code for this to gel. Feel free to browse the repo to see what the files themselves look like.

Mix Aliases: Mask the Ugliness

Each time you deploy, your workflow will likely look something like this:

  1. Deploy
  2. Run Migrations
  3. Start up

Ansible ships with a tool called ansible-playbook that you can use to run these commands individually:
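With playbooks in a deploy/ directory (paths are placeholders):

```shell
ansible-playbook deploy/deploy.yml
ansible-playbook deploy/migrate.yml
ansible-playbook deploy/start.yml
```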

But… that’s a lot of typing isn’t it? Why not hide the long commands with a mix alias?

Create two shell scripts:
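For instance (script and playbook names are placeholders):

```shell
#!/bin/sh
# deploy.sh: ship the release and run migrations
ansible-playbook deploy/deploy.yml
ansible-playbook deploy/migrate.yml
```

```shell
#!/bin/sh
# start.sh: boot the release on the box
ansible-playbook deploy/start.yml
```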


Make them executable with chmod +x. Then, add this to your mix.exs file:
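A sketch, assuming the two scripts are named deploy.sh and start.sh and live at the project root (names of your choosing):

```elixir
def project do
  [
    # ...your existing project config...
    aliases: aliases()
  ]
end

defp aliases do
  [
    # Each alias shells out to the corresponding deployment script
    deploy: fn _args -> Mix.shell().cmd("./deploy.sh") end,
    start: fn _args -> Mix.shell().cmd("./start.sh") end
  ]
end
```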

Once you have those aliases, deploying your app is as simple as:
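With the aliases in place:

```shell
mix deploy
mix start
```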

Lo, and behold, with two commands, you have made your app available to the world!

Thanks for reading. Peruse the code behind this post here, and feel free to email me with questions or suggestions.

Module Glossary

Here’s a quick reference for the modules named in this post:

  • apt: installs system packages on Debian-based machines.
  • pip: installs Python packages (used here for psycopg2).
  • set_fact: stores a value as a reusable “fact” (paired here with the lookup plugin).
  • postgresql_user: creates and configures Postgres users.
  • postgresql_db: creates Postgres databases.

Illustration by Nicole Thayer.
