Services and Serverless – Carbon Five LA Talk Night August 17th

Posted in Announcements, Development, Docker, Events, Los Angeles, Microservices, Ops, Web

The push toward microservices is on, with developers writing simpler applications that interact with one another. But how do you deploy these services? How do you manage versioning and discoverability? Learn two approaches from our August 17th Talk Night speakers as they cover deploying with Docker and going completely serverless with Amazon Web Services’ Lambda!

First we’ll have Samuel Chow, Head of Mobile at Farmers Insurance, give an “Intro to Docker”:

Docker has become one of the hottest technologies in the industry. But what is Docker? Why do developers love it and why might you want to use it? We will cover how it works and introduce the Docker terminology and toolset.

Then Grindr’s Principal Applications Engineer Tom Bray walks us through “Going Serverless with AWS Lambda”:

Microservices got you down? Come learn how to implement Serverless architectures with AWS Lambda and API Gateway from someone who has done it in the real world. Get a glimpse of life beyond the operational overhead that Microservices require and discover the benefits of Serverless. Decrease time to market, reduce operational cost, and let AWS Lambda handle scaling for you automatically while you only pay for the compute you use.

Our doors will open at 6pm with pizza, drinks (including non-alcoholic options), and of course wi-fi provided. The talks will kick off at 7pm, with Q&A interspersed throughout.

So sign up on Meetup and get ready to get some macro-knowledge on building microservices!


Docker, Rails, & Docker Compose together in your development workflow

Posted in Docker, Ops, Rails, Web


We’ve been trialing Docker and Docker Compose (previously known as Fig) on a Rails project here at Carbon Five. In the past, my personal experience with Docker had been that the promise of portable containerized apps was within reach, but the tooling and development workflow were still awkward – commands were verbose, configuration and linking steps were complicated, and the overall learning curve was high.

My team decided to take a peek at the current landscape of Docker tools (primarily boot2docker and Docker Compose) and see how easily we could spin up a new app and integrate it into our development workflow on Mac OS X.

In the end, I’ve found my experience with the Docker tools surprisingly pleasant; they integrate easily with existing Rails development workflows, with only minor performance overhead. Docker Compose offers a seamless way to build containers and orchestrate their dependencies, and lowers the learning curve for building Dockerized applications. Read on to find out how we built ours.
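To give a flavor of what Compose buys you, here is a minimal sketch of a docker-compose.yml for a Rails app. The service names, image versions, and ports are hypothetical, not our project’s exact setup:

# docker-compose.yml – a hypothetical minimal example
web:
  build: .
  command: bundle exec rails server -b 0.0.0.0
  ports:
    - "3000:3000"
  volumes:
    - .:/app
  links:
    - db
db:
  image: postgres:9.4

With this in place, docker-compose up builds the web image, starts both containers, and wires the db link into the web container’s environment, so the whole stack comes up with one command.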


Node.js in Production

Posted in Ops, Web

When running a Node.js application in production, you need to keep stability, performance, security, and maintainability in mind. Outlined here are what I think are the best practices for putting Node.js into production.

By the end of this guide, the setup will include three servers: a load balancer (lb) and two app servers (app1 and app2). The load balancer will health-check the app servers and balance traffic between them. The app servers will use a combination of systemd and Node’s cluster module to load balance and route traffic across multiple Node processes on each server. Deploys will be a one-line command from the developer’s laptop and cause zero downtime and no failed requests.

It will look roughly like this (diagram not reproduced here; photo credit: DigitalOcean).
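As a sketch of the systemd half of that setup, a unit file for one Node process might look something like this; the paths, user, and port are illustrative, not the guide’s exact values:

# /etc/systemd/system/myapp.service – a hypothetical example
[Unit]
Description=myapp Node.js server
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/myapp/server.js
Restart=always
User=node
Environment=NODE_ENV=production
Environment=PORT=3000

[Install]
WantedBy=multi-user.target

Restart=always gives you process supervision from the OS, while the cluster module inside the app fans requests out across worker processes.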



RethinkDB: a Qualitative Review

Posted in Database, Ops, Web

At Carbon Five we install and use many different database engines. Document-oriented databases are proving to be a good fit for more and more of our projects. MongoDB is the most popular of these and provides a powerful set of tools for storing and querying data, but it has been plagued by performance problems at very large database or cluster sizes. Riak is another interesting option, built from the ground up to perform at scale, but it is difficult to set up and has a minimal API that leaves much more of the data-management work to you. RethinkDB is a relative newcomer that aims to fill the gap between the two.

The trade-off between developer friendliness and high performance is unavoidable, but I’ve been looking for something in the middle. RethinkDB claims to solve the “1-15” problem: being a database that is reasonable to use as a single node but can scale up to around 15 nodes with minimal configuration and no changes to the application. Whether that claim holds up remains to be seen. In this post I take it for a test drive and offer a qualitative assessment (i.e. no benchmarks) of its ease of use and effectiveness for application development. The question is: what do developers have to give up for the peace of mind of knowing they won’t have to rip out the persistence layer when the app gains popularity? (tl;dr: not much.)



Using HAProxy with Socket.io and SSL

Posted in Ops, Web

Donning my ops hat over the last few months, I have learned a fair amount about HAProxy, Node.js, and Socket.io. I was pretty surprised by how little definitive information there was on what I was trying to do for one of our projects, and HAProxy can be pretty intimidating the first time around.

What and Why

  • Route all traffic through a load-balancing proxy in preparation for horizontal scale and splitting of services (i.e. X and Y axes on the AKF Scale Cube).
  • Support Socket.io’s websocket and flashsocket transports. Our application sends/receives many events to/from its clients and requires low latency for a great user experience; persistent sockets help make that happen. Sadly, IE (<10) only supports Flash sockets.
  • Use TLS/SSL for all traffic for security and to help push through finicky internet infrastructure. It’s surprising how many organizations have firewalls that disallow socket traffic. SSL traffic is often allowed through.
  • Terminate SSL at the proxy so that we don’t have to deal with certs and whatnot in the application.
  • Redirect all HTTP traffic to HTTPS.

To make all of this happen, I cobbled together information from a few posts and dug into the documentation to fill in the gaps. We’ve been using our HAProxy configuration for a couple of months now and it’s working well.
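As a rough sketch of the result, a config hitting the bullet points above might look like this; the hostnames, ports, and cert path are placeholders, and it assumes HAProxy 1.5+ for native SSL termination:

# haproxy.cfg (simplified sketch, not our full config)
frontend www
    bind *:80
    # redirect all plain-HTTP traffic to HTTPS
    redirect scheme https code 301

frontend www-ssl
    # terminate SSL at the proxy
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    default_backend node_app

backend node_app
    balance roundrobin
    option forwardfor
    # a long tunnel timeout keeps websocket/flashsocket connections open
    timeout tunnel 1h
    server app1 10.0.0.1:5000 check

The real config also needs global and defaults sections with sensible timeouts, but this shows the shape of the routing.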



Deploying node.js on Amazon EC2

Posted in Ops, Web

After nearly a month of beating my head against the wall that is hosted Node.js stacks, with their fake beta invites and non-existent support, I decided it was time to take matters into my own hands. Amazon Web Services (AWS) offers 12 months of a micro instance for free (as in beer), with 10 GB of disk and 613 MB of memory. This is perfect for an acceptance server running Node. All you need to do is sign up with a new email address and provide a credit card. Totally worth it. After 12 months, the price jumps to roughly $15 a month.

I’m a huge fan of Debian and its progeny Ubuntu. The folks at http://www.alestic.com/ do a great job of providing Amazon Machine Images (AMIs) that are production ready. I chose Ubuntu 10.04 LTS because it will be supported until April of 2015. The 64-bit AMI for the us-east region is ami-63be790a. Feel free to choose one that best suits your needs.


Think Globally, Stage Locally

Posted in Ops

Or: how to create and deploy to a staging environment running locally!

Staging: an environment that duplicates production as closely as possible, so you can catch any lingering bugs before you update production. Most of the Rails community develops on OS X but deploys to Linux; this can be fragile, since it’s easy to forget the Linux-specific environment changes your app needs. At Carbon Five, most of our customers can afford to maintain a dedicated staging environment, but for smaller projects I wanted my own Linux staging environment without the cost of a real slice or EC2 instance.

In this post, I will show you how to create a Linux VM with Vagrant and use Capistrano to deploy to it. My coworker Jared recently posted a nice intro to Vagrant, a great project by Mitchell Hashimoto that simplifies and automates the use of virtual machines during development. You should read his post first, as I’m not going to cover Chef here, which I highly recommend for automating the provisioning of your VM. Using Chef means your staging and production boxes can be virtually identical, built from the exact same recipes.

Let’s assume you have a Rails 3 app for which you want to create a staging VM. We’ll install Vagrant and configure it in the project like so:

  cd myapp
  # NOTE: Make sure you've installed VirtualBox first!
  gem install vagrant
  # Downloads a blank Ubuntu 11.04 64-bit image
  # Or find your own box on http://vagrantbox.es
  vagrant box add ubuntu-1104-server-amd64 http://dl.dropbox.com/u/7490647/talifun-ubuntu-11.04-server-amd64.box
  # Adds a Vagrantfile to your Rails app which talks to the new image
  vagrant init ubuntu-1104-server-amd64
  # Starts the new VM
  vagrant up
  # Adds SSH details to your SSH config so Capistrano can deploy directly to your VM
  vagrant ssh-config >> ~/.ssh/config
  # Logs into your new VM
  vagrant ssh
  # Perform a lot of Chef recipe work
  # ...Left as an exercise to the reader...

This whole process, minus the box download, should take a minute or two. Remember that you will need to do a bunch of Chef work to install the stack your application needs (e.g. Unicorn, JRuby, etc.). Once that is done, let’s work on the Capistrano configuration. In this case, I’m using the capistrano-ext gem to add multiple-environment support so we can deploy to production or to our new staging VM:

# config/deploy.rb
set :stages, %w(staging production)
set :default_stage, "staging"
require 'capistrano/ext/multistage'
require "bundler/capistrano"

set :user, 'vagrant'
set :application, "myapp"
set :deploy_to, "/home/#{user}/#{application}"
set :repository,  "git@github.com:acmeco/#{application}"

set :scm, :git
set :branch, "master"
set :deploy_via, :remote_cache
ssh_options[:forward_agent] = true

And in config/deploy/staging.rb:

# 'vagrant' = the hostname of the new VM
role :web, "vagrant"
role :app, "vagrant"
role :db,  "vagrant", :primary => true
set :rails_env, 'staging'

The secret sauce is the vagrant ssh-config command, which configures SSH so it knows how to log into your new Vagrant VM. Now all we need to do is run a simple cap staging deploy, and Capistrano will use SSH to connect to the VM and have it pull your latest changes from your GitHub repo.

Once deployed, you can tell Vagrant to forward your application’s port in the VM to a localhost port. In my case, I have Unicorn running on port 5000 in the VM, forwarded to port 8080 on localhost in OSX. With this configuration in my Vagrantfile, I can browse to http://localhost:8080 to hit my Rails app running in the VM.

  config.vm.forward_port "unicorn", 5000, 8080

Final note: if you have trouble contacting GitHub, make sure you are running ssh-agent to handle key requests from the VM. This will allow the vagrant user in the VM to act as you when contacting GitHub: run eval $(ssh-agent) && ssh-add on your local machine (NOT in the VM).

Most of this post is typical Vagrant and Capistrano configuration. With just a few simple tricks, we can tie the two together for great victory and, hopefully, more stability for your site. Good luck!