In my last post, I described how to generate a platform-specific Elixir release. Now, the only thing left to do is to put it on the world wide web.
To follow along with this post, you’ll need a few things:

- A platform-specific Elixir release (built in the last post)
- A remote machine you can SSH into, with your public key on it
- Ansible installed on your local machine
This post covers a deployment with downtime. That means that during its course, your app will be offline for a brief amount of time (as we deploy and restart). The BEAM allows for hot upgrades, but we aren’t going to cover that functionality in this post. Ansible also has ways to do rolling updates, but they require infrastructure that is a bit more complex than the one used in this tutorial.
That being said, to consider our deployment automation a success, it needs to be able to do the following:

1. Copy the release to the remote machine and run it.
2. Provision a database for the application to use.
3. Be repeatable, so that re-deploying is painless.
One way to accomplish this, the easiest way, is through plain ol’ SSH. SCP is a time-tested tool that allows for uploading files through a secure tunnel. Assuming you’ve placed the public key of your RSA key-pair on your remote machine, you should be able to do the following:
```shell
$ scp local/path/to/release/folder username@host_ip:remote/path/to/release/folder
$ ssh username@host_ip
<… authenticate …>
$ remote/path/to/release/folder daemon
```
These commands take care of goal 1, but they ignore 2 and 3 completely. Our app does not have a database available to it. There also isn’t much we can re-use — to redeploy, we’ll need to SSH back into the box, stop the application, delete the release, and then re-copy everything to the box from our local machine.
While simple is good, too simple can be a headache. To follow the plan, we’ll need to automate the configuration of the remote machine itself.
To ensure that our application behaves the way we expect it to, we need to control the environment it runs in. We’ll need to build it from scratch, provision and seed a database, copy the release, run it, and test for uptime. Each one of those commands needs to be idempotent because when the time comes to ship a new build, we’ll have to tear everything down and do it all again.
Ansible is a tool we can use to replicate a working environment on our remote machine. Named after the sci-fi device from Ursula K. Le Guin’s novels, it is a tool that wraps SSH commands in abstractions called “modules” so that we can run complex commands remotely and programmatically.
The Ansible binary (that ships when you install Ansible) runs commands on remote machines through the use of composable pieces of software called modules. The composition itself happens in a YAML file called a playbook. You can see samples of the playbooks I use in this tutorial in the associated repo. But as a brief introduction, this is what a playbook could look like:
```yaml
- hosts: all               # 1
  remote_user: user_name   # 2
  tasks:
    - name: <task name>    # 3
      module_name:         # 4
        module_arg1: "foo" # 5
      register: my_output  # 6
```
Here’s what’s going on:

1. `hosts` selects which machines from your inventory the playbook runs against.
2. `remote_user` is the user Ansible will SSH in as.
3. Every task gets a human-readable `name`.
4. The module the task runs.
5. The arguments passed to that module.
6. The `register` keyword assigns the output of the task to a variable, so it can be referenced later.

Your playbooks can be placed anywhere in your project. The only time the path to them is important is when you use `ansible-playbook` to run them (covered below).
Let’s get started.
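Ansible finds machines through an inventory file. As an illustration only, a minimal entry in the default inventory might look like this (the group name and IP address below are placeholders, not values from this tutorial):

```ini
# /etc/ansible/hosts
[webservers]
203.0.113.10
```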
Add the IP of your target machine to the /etc/ansible/hosts file. This tutorial assumes your target machine is running a Debian Linux distribution, so the following sections will reference the Debian package manager, APT. If your target machine is non-Linux, you can still use Ansible, but you might need to either search for a third-party module that can install packages or write one yourself. But assuming you are working with a Debian machine, let’s use the apt module to set up our machine:
```yaml
# system-setup.yml
---
- name: install system packages
  apt:
    update_cache: yes
    state: present
    name:
      - gcc
      - g++
      - curl
      - wget
      - unzip
      - git
      - python-dev
      - python-apt
      - make
      - automake
      - autoconf
      - libreadline-dev
      - libncurses-dev
      - libssl-dev
      - libyaml-dev
      - libxslt-dev
      - libffi-dev
      - libtool
      - unixodbc-dev
```
Let’s also install pip, the Python package manager, since we’ll need it for the next step:
```yaml
# system-setup.yml
- name: install pip
  apt:
    update_cache: yes
    state: present
    name: python-pip
```
Now that our machine has been provisioned with the basics, we need to install Postgres. The apt
module will, again, do nicely:
```yaml
# postgres.yml
---
- name: install postgres + postgres packages
  apt:
    update_cache: yes
    state: present
    name:
      - postgresql
      - postgresql-contrib
      - libpq-dev
```
(You can include these Postgres dependencies as a part of the first apt call if you want, but I chose to separate them because I found it easier to read.)
Now, to interact with our Postgres instance, we’ll need a driver. Since Ansible is written in Python, it works best with Python libraries. Let’s use the psycopg2
package and install it with pip:
```yaml
# postgres.yml
- name: install psycopg2
  pip:
    name: psycopg2
```
Next, we’ll need to securely include sensitive credentials (in this case, our database username and password). To do that, we’re going to use a plugin called `lookup`. To use the looked-up value later on, we need to store it and make it available to the rest of our pipeline. Ansible calls these stored values “facts”; to create one, use the `set_fact` module:
```yaml
# postgres-facts.yml
- name: compute database name
  set_fact:
    database_name: "{{ lookup('env', 'DATABASE_NAME') }}"
  when: "database_name is not defined"
- name: set database host
  set_fact:
    database_host: "{{ lookup('env', 'DATABASE_HOST') }}"
- name: set database password
  set_fact:
    database_password: "{{ lookup('env', 'DATABASE_PASSWORD') }}"
- name: set database user
  set_fact:
    database_user: "{{ lookup('env', 'DATABASE_USER') }}"
```
(We’re looking for values in environment variables, so make sure they are set on your local machine).
This approach could get annoying if you plan on deploying from more than one machine (since you might not always have the same environment). Ansible Vault is an alternative, but, unfortunately, out of this post’s scope.
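Since these lookups read your local environment, export the variables in the shell you run Ansible from before deploying. The values below are placeholders; substitute your own:

```shell
# Placeholder credentials; replace with your real values before deploying.
export DATABASE_NAME="api_prod"
export DATABASE_HOST="127.0.0.1"
export DATABASE_USER="api"
export DATABASE_PASSWORD="s3cret"
```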
Now that we have an instance and credentials, we can create a user and associate it with an actual database.
Two modules will come in handy here, postgresql_user and postgresql_db:
```yaml
# postgres.yml
- name: create postgres user
  postgresql_user:
    name: "{{database_user}}"           # 1
    password: "{{database_password}}"   # 1
    role_attr_flags: CREATEDB,SUPERUSER # 2
    state: present
  become_user: postgres # 3
  become: yes           # 4
- name: create database
  postgresql_db:
    name: "{{database_name}}" # 1
    encoding: "UTF-8"
  become_user: postgres # 3
  become: yes           # 4
```
Let’s break down what’s going on here. The postgresql_user task is doing a couple of things:

1. It creates a user with the name and password facts we looked up earlier.
2. It grants that user the CREATEDB and SUPERUSER role attributes.
3. It runs the task as the postgres system user, which is allowed to administer the database.
4. It uses `become` to escalate privileges so Ansible can switch to that user.

Next, we actually create the database, using #3 and #4 from above. Together, these two tasks allow us to access the database from our application, provided we use the right username and password.
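If you want to confirm the role and database actually work before moving on, you could add a quick smoke-test task. This is a sketch, not code from the repo; it assumes psql is installed on the box, Postgres is listening on localhost, and password authentication is enabled:

```yaml
# Hypothetical connectivity check, run right after the two tasks above
- name: verify the app user can reach the database
  command: psql "postgresql://{{database_user}}:{{database_password}}@localhost/{{database_name}}" -c "SELECT 1;"
  register: db_check
  changed_when: false
```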
If you are using an AWS EC2 instance to host your database, you may want to provision a permanent data store like EBS. Instance-store-backed EC2 instances lose their data when they stop, and they can stop and restart at any time.
Next, we need to move our artifact to our box:
```yaml
# deploy-release.yml
---
# 1
- name: check to see if release archive exists locally
  stat:
    path: "{{ release_archive_path }}"
  register: release_stat
  delegate_to: 127.0.0.1
# 2
- name: fail if no local release
  fail:
    msg: "Local release tarball not found. Copy it to {{ release_archive_path }}."
  when: not release_stat.stat.exists
# 3
- name: clean remote release directory
  file:
    path: "{{remote_release_dir}}"
    state: absent
- name: create remote release directory
  file:
    path: "{{remote_release_dir}}"
    state: directory
# 4
- name: unarchive release on remote server
  unarchive:
    src: "{{release_archive_path}}"
    dest: "{{remote_release_dir}}"
# 5
- name: check to see if release artifact exists remotely
  stat:
    path: "{{remote_release_artifact_path}}"
  register: remote_release_artifact_stat
# 6
- name: echo end
  debug:
    var: remote_release_artifact_stat.stat.exists
```
Here’s the breakdown:

1. Use `stat` (delegated to 127.0.0.1, i.e. your local machine) to check that the release tarball exists locally.
2. Fail fast if it doesn’t.
3. Delete and recreate the remote release directory, so every deploy starts from a clean slate.
4. Use `unarchive` to copy the tarball to the box and extract it in one step.
5. Check that the extracted artifact exists on the remote machine.
6. Print whether it exists.

If Step 6 passes, you’ve successfully deployed your app! Now we can do one of two things: apply a migration, or start it up.
Once we have the database up and running and an application artifact to play with, we have the option of applying migrations. Have a look at this playbook:
```yaml
# run-migrations.yml
---
# 1
- name: check if postgres is running
  command: "/etc/init.d/postgresql status"
  register: postgres_status
# 2
- fail:
    msg: "Postgres is not running"
  when: postgres_status.stderr != "" or postgres_status.failed != false
# 3
- name: check to see if release artifact exists remotely
  stat:
    path: "{{remote_release_artifact_path}}"
  register: remote_release_artifact_stat
# 4
- fail:
    msg: "No remote release artifact"
  when: not remote_release_artifact_stat.stat.exists
# 5
- name: run migrations on remote server
  command: "{{remote_release_artifact_path}} eval 'ReleaseTasks.migrate'"
  when: remote_release_artifact_stat.stat.exists
```
Here’s what’s going on:

1. Ask the init system whether Postgres is running, capturing the output.
2. Fail if it isn’t.
3. Check that the release artifact exists on the box.
4. Fail if it doesn’t.
5. Run the migrations by calling the release binary’s `eval` command with our `ReleaseTasks.migrate` function.

The ReleaseTasks module is the following:
```elixir
# release_tasks.ex
defmodule ReleaseTasks do
  def migrate do
    # Start our app (here :api) and its dependencies, including the Repo
    {:ok, _} = Application.ensure_all_started(:api)

    Ecto.Migrator.run(
      Api.Repo,
      Application.app_dir(:api, "priv/repo/migrations"),
      :up,
      all: true
    )

    # Stop the VM once the migrations finish
    :init.stop()
  end
end
```
Keep in mind this snippet assumes the use of Ecto. If you aren’t using Ecto, feel free to replace this code with another script that runs your migrations.
Once our database, artifact, and possible migrations are good to go, we can start our application on our box:
```yaml
# up.yml
---
# 1
- name: check to see if release artifact exists remotely
  stat:
    path: "{{remote_release_artifact_path}}"
  register: remote_release_artifact_stat
# 2
- name: start remote server
  command: "{{remote_release_artifact_path}} daemon"
  when: remote_release_artifact_stat.stat.exists
  register: start_cmd
# 3
- name: echo end
  debug:
    var: start_cmd
```
Here’s what’s happening. Do you notice a pattern?

1. Check that the release artifact exists on the box.
2. If it does, start it with the `daemon` command, registering the output.
3. Print that output so you can see how the start went.
Admittedly this last part is not as elegant as I’d like it to be, but it is a good way of visualizing what’s going on as the box is running your program. If you have suggestions on how to improve this, feel free to email me.
Now, if you get to Step 3 and see a successful output in your console, your application is officially running on the internet!
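To go beyond eyeballing the console output, you could test for uptime with Ansible’s uri module. This is a sketch rather than code from the repo, and it assumes your app serves HTTP on port 4000 (Phoenix’s default); adjust the URL to match your setup:

```yaml
# Hypothetical health check; retries until the app answers or gives up
- name: wait for the app to respond
  uri:
    url: "http://localhost:4000/"
    status_code: 200
  register: health
  until: health.status == 200
  retries: 5
  delay: 3
```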
In the beginning of this post I mentioned that our deployment would have some downtime. Here is where that downtime comes into play. Should you ever need to re-deploy (and you will), you’ll first need to stop your application. The teardown could be as follows:
```yaml
# down.yml
---
# 1
- name: check to see if release artifact exists remotely
  stat:
    path: "{{remote_release_artifact_path}}"
  register: remote_release_artifact_stat
# 2
- name: stop remote server
  command: "{{remote_release_artifact_path}} stop"
  when: remote_release_artifact_stat.stat.exists
  register: stop_cmd
# 3
- name: clean remote release directory
  file:
    path: "{{remote_release_dir}}"
    state: absent
# 4
- name: echo end
  debug:
    var: stop_cmd
```
Classic breakdown:

1. Check that the release artifact exists on the box.
2. If it does, stop it with the release binary’s `stop` command.
3. Delete the remote release directory.
4. Print the output of the stop command.
If Step 4 is successful, you’ve successfully torn everything down and made the machine ready for a future deployment.
If you’re looking for a zero-downtime deployment, shoot me an email and I’ll do some digging around how to tweak Ansible to fit your needs. You can also look here.
At this point, you should have six playbooks:

1. system-setup.yml
2. postgres.yml
3. deploy-release.yml
4. run-migrations.yml
5. up.yml
6. down.yml
A lot of these rely on the same system facts, namely:

- the app’s name and version
- the local release name, directory, and archive path
- the remote release directory, archive path, and artifact path
- the credentials directory
We can put all of these facts into a file that will be available to every module:
```yaml
# project-facts.yml
---
- name: set app name
  set_fact:
    app_name: api
- name: set app version
  set_fact:
    app_version: "0.1.0"
- name: set credentials directory path
  set_fact:
    credentials_dir: "~/credentials/"
- name: set release name
  set_fact:
    release_name: "{{app_name}}-{{app_version}}"
- name: set release directory name
  set_fact:
    release_dir: "../rel/artifacts/"
- name: set release archive path
  set_fact:
    release_archive_path: "{{release_dir}}{{release_name}}.tar.gz"
- name: set remote release directory
  set_fact:
    remote_release_dir: "~/rel/artifacts/"
- name: set remote release archive path
  set_fact:
    remote_release_archive_path: "{{remote_release_dir}}{{release_name}}.tar.gz"
- name: set remote release artifact path
  set_fact:
    remote_release_artifact_path: "{{remote_release_dir}}opt/build/_build/prod/rel/api/bin/api"
```
You can reference this file before any playbook that needs to access global project facts.
Now that you have your facts files, you can simplify the rest of your playbooks into the following structure:
```
~/project/deploy/
├── facts/
│   ├── project-facts.yml
│   └── postgres-facts.yml
├── tasks/
│   ├── system-setup.yml
│   ├── postgres.yml
│   ├── deploy-release.yml
│   ├── run-migrations.yml
│   ├── up.yml
│   └── down.yml
├── create-db.yml
├── deploy.yml
├── migrations.yml
├── startup.yml
└── teardown.yml
```
Each of the playbooks in the deploy/
directory references a facts file as well as a task. It might take looking at actual code for this to gel. Feel free to browse the repo to see what the files themselves look like.
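To illustrate the shape (this is a simplified sketch, not the repo’s exact contents), deploy.yml might just stitch a facts file and a task file together:

```yaml
# deploy.yml (illustrative sketch)
- hosts: all
  remote_user: user_name
  tasks:
    - import_tasks: facts/project-facts.yml
    - import_tasks: tasks/deploy-release.yml
```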
Each time you deploy, your workflow will likely look something like this:

1. Tear down the running release, if there is one (teardown.yml).
2. Deploy the new artifact (deploy.yml).
3. Run any migrations (migrations.yml).
4. Start the app back up (startup.yml).
Ansible ships with a tool called ansible-playbook
that you can use to run these commands individually:
```shell
$ ansible-playbook deploy/deploy.yml
$ ansible-playbook deploy/migrations.yml
$ ansible-playbook deploy/startup.yml
```
But… that’s a lot of typing isn’t it? Why not hide the long commands with a mix alias?
Create two shell scripts:
```shell
# deploy.sh
#!/usr/bin/env bash
ansible-playbook deploy/deploy.yml
```
and
```shell
# up.sh
#!/usr/bin/env bash
ansible-playbook deploy/startup.yml
```
Make them executable with chmod +x
. Then, add this to your mix.exs
file:
```elixir
# mix.exs
…
defp aliases do
  [
    deploy: ["cmd ./path/to/deploy.sh"],
    up: ["cmd ./path/to/up.sh"],
    down: ["cmd ./path/to/down.sh"]
  ]
end
```
Once you have those aliases, deploying your app is as simple as:
$ mix deploy | |
$ mix up | |
# For tear down… | |
$ mix down |
Lo and behold: with two commands, you have made your app available to the world!
Thanks for reading. Peruse the code behind this post here, and feel free to email me with questions/suggestions at prakash@carbonfive.com.
Here’s a quick reference for all the modules we used:

- apt
- pip
- set_fact (and the lookup plugin)
- postgresql_user
- postgresql_db
- stat
- fail
- file
- unarchive
- command
- debug
Illustration by Nicole Thayer.
Interested in more software development tips & insights? Visit the development section on our blog!