Docker, Rails, & Docker Compose together in your development workflow



We’ve been trialing Docker and Docker Compose (previously known as Fig) on a Rails project here at Carbon Five. In the past, my personal experience with Docker had been that the promise of portable containerized apps was within reach, but the tooling and development workflow were still awkward – commands were complex, configuration and linking steps were complicated, and the overall learning curve was high.

My team decided to take a peek at the current landscape of Docker tools (primarily boot2docker and Docker Compose) and see how easily we could spin up a new app and integrate it into our development workflow on Mac OS X.

In the end, I’ve found my experience with Docker tools to be surprisingly pleasant; the tooling easily integrates with existing Rails development workflows with only a minor amount of performance overhead. Docker Compose offers a seamless way to build containers and orchestrate their dependencies, and helps lower the learning curve to build Dockerized applications. Read on to find out how we built ours.

Introduction to docker-compose (née Fig).

Docker Compose acts as a wrapper around Docker – it links your containers together and provides syntactic sugar around some complex container linking commands.
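To see what that sugar buys you, here is roughly the equivalent of our two-container setup expressed as raw docker commands – an illustrative sketch only (the image and container names are placeholders, not from this project):

$ docker run -d --name db mysql:5.6.22
$ docker run -it --link db:db -p 3000:3000 -v "$(pwd):/myapp" myrailsapp_web

Compose captures all of this in a single declarative file, so nobody has to remember the flags.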

We liked Docker Compose for its ability to coordinate and spin up your entire application and its dependencies with one command. In the past, tools like Vagrant were an easy way to generate a standard image for your development team to share and build on. Docker Compose offers similar benefits of decoupling the app from the host environment, but also provides the container vehicle for the app to run in all environments – that is, the container you develop in will often be the same container that you deploy to production.

Docker (with the orchestration tooling provided by Compose) provides us the ability to:

  • Upgrade versions of Ruby or Node (or whatever runtime your app requires) in production with far less infrastructure coordination than normally required.
  • Reduce the number of moving parts in the deployment process. Instead of writing complex Puppet and Capistrano deployment scripts, our deployments will now center around moving images around and starting containers.
  • Simplify developer onboarding by standardizing your team on the same machine images.

In this example, we will run two Docker containers – a Rails container and a MySQL container – and rely on Compose to build, link, and run them.

Installing boot2docker, Docker, and Docker Compose.

Docker runs inside a VirtualBox VM via a minimal Linux image called boot2docker. We need boot2docker and VirtualBox because Docker depends on Linux kernel features (including layering filesystems such as aufs) that Mac OS X does not provide. Hence, we must run our Docker containers within yet another virtual machine.

  1. Download and install VirtualBox.
  2. Now install boot2docker and Docker Compose.
    $ brew install boot2docker docker-compose
  3. Initialize and start up boot2docker
    $ boot2docker init
    $ boot2docker start
  4. Configure your Docker host to point to your boot2docker image.
    $ eval "$(boot2docker shellinit)"

    You’ll need to run this for every terminal session that invokes the docker or docker-compose command – better to add this line to your .zshrc or .bashrc.
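    For the curious, shellinit emits a handful of export statements along these lines (illustrative output – the exact IP and paths vary per machine):

    export DOCKER_HOST=tcp://
    export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1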

Creating a Dockerfile

Let’s start by creating a Dockerfile for this app. This specifies the base dependencies for our Rails application. We will need:

  • Ruby 2.2 – for our Rails instance
  • NodeJS and NPM – for installation of Karma, jshint, and other JS dependencies.
  • MySQL client – for ActiveRecord tasks
  • PhantomJS – for executing JS-based tests
  • vim – for inspecting and editing files within our container

Create a Dockerfile from within your Rails app directory.

FROM ruby:2.2.0
RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy mysql-client vim
RUN npm install -g phantomjs

RUN mkdir /myapp

WORKDIR /tmp
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install

ADD . /myapp
WORKDIR /myapp
RUN RAILS_ENV=production bundle exec rake assets:precompile --trace
CMD ["rails","server","-b",""]

Let’s break this down line by line:

FROM ruby:2.2.0

The FROM directive specifies the library/ruby base image from Docker Hub, and uses the 2.2.0 tag, which corresponds to the Ruby 2.2.0 runtime.

From here on, we are going to be executing commands that will build on this reference image.

RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy mysql-client vim
RUN npm install -g phantomjs

Each RUN command builds up the image, installing specific application dependencies and setting up the environment. Here we install our app dependencies both from apt and npm.

An aside on how a Docker image is built

One of the core concepts in Docker is the concept of “layers”. Docker runs on operating systems that support layering filesystems such as aufs or btrfs. Changes to the filesystem can be thought of as atomic operations that can be rolled forward or backwards.

This means that Docker can effectively store its images as snapshots of each other, much like Git commits. This also has implications for how we can build up and cache copies of the container as we go along.

The Dockerfile can be thought of as a series of rolling incremental changes to a base image – each command builds on top of the line before. This allows Docker to quickly rebuild changes to the reference image by understanding which lines have changed – and not rebuild the image from scratch each time.
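You can inspect this layer stack yourself with docker history, which lists each layer in an image together with the instruction that created it and its size (assuming Compose named your built image myrailsapp_web, after the project directory and service):

$ docker history myrailsapp_web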

Keep these concepts in mind as we talk about speeding up your Docker build in the following section.

Fast Docker builds by caching your Gemfiles

The following steps install the required Ruby gems for Bundler, within your app container:

WORKDIR /tmp
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle install

Note how we sneak the Gemfiles into /tmp, then run bundle install, which downloads and installs the gems into the image. This is a cache hack – in the past we would have kept the Gemfiles with the rest of the application code in /myapp.

Keeping the Gemfiles alongside the app code would have meant that the entire bundle install command was re-run on each docker-compose build — without any caching — due to the constant change in the code in the /myapp directory.

By separating the Gemfiles into their own directory, we logically separate the Gemfiles, which are far less likely to change, from the app code, which is far more likely to change. This reduces the number of times we have to wait for a clean bundle install to complete.
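Annotated, the cache behavior of those lines looks like this (the comments are mine):

WORKDIR /tmp
COPY Gemfile Gemfile            # layer rebuilt only when Gemfile changes
COPY Gemfile.lock Gemfile.lock  # layer rebuilt only when Gemfile.lock changes
RUN bundle install              # re-runs only if a layer above was rebuilt
ADD . /myapp                    # rebuilt on any code change – but the gems are already installed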

HT: Brian Morearty: “How to skip bundle install when deploying a Rails app to Docker”

Adding the app

Finally, we finish our Dockerfile by adding our current app code to the working directory.

ADD . /myapp
WORKDIR /myapp
RUN RAILS_ENV=production bundle exec rake assets:precompile --trace
CMD ["rails","server","-b",""]

This copies the contents of the app directory on the host into the /myapp directory in the image, and makes /myapp the working directory.

Note that we precompile all our assets before the container boots up – this ensures that the container is preloaded and ready to run, and squares with the Docker tenet that a container should be the same container that runs in development, test, and production environments.

Setting up Docker Compose

Now that we’ve defined a Dockerfile for booting our Rails app, we turn to the Compose piece that orchestrates the linking phase between the Rails app and its dependencies – in this case, the DB.

A docker-compose.yml file automatically configures our application ecosystem. Here, it defines our Rails container and its db container:

web:
  build: .
  volumes:
    - .:/myapp
  ports:
    - "3000:3000"
  links:
    - db
  env_file:
    - '.env.web'

db:
  image: library/mysql:5.6.22
  ports:
    - "13306:3306"
  env_file:
    - '.env.db'

A simple:

$ docker-compose up

will spin up both the web and db instances.

One of the most powerful aspects of Docker Compose is the ability to abstract away the configuration of your server, whether it is running as a development container on your computer, a test container on CI, or on your production Docker host.

The directive:

links:
  - db

will add an entry for db to the Rails container’s /etc/hosts, linking the hostname to the correct container. This allows us to write our database.yml like so:

# config/database.yml
development: &default
  host: db
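A fuller development block might look like the following – a sketch that assumes the mysql2 adapter and the MySQL image’s root user (adjust the credentials to your own setup):

# config/database.yml
development: &default
  adapter: mysql2
  encoding: utf8
  host: db
  username: root
  password: <%= ENV['DATABASE_PASSWORD'] %>
  database: myapp_development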

Another important thing to note is the volumes configuration:

# docker-compose.yml
volumes:
  - .:/myapp

This mounts the current directory . on the host Mac to the /myapp directory in the container. This allows us to make live code changes on the host filesystem and see them reflected immediately in the container.
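A quick way to convince yourself the mount is live: append a line to a file on the host, then read it back from a fresh container (the file path here is just an example):

$ echo '# edited on the host' >> app/helpers/application_helper.rb
$ docker-compose run web tail -n 1 app/helpers/application_helper.rb
# edited on the host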

Also note that we make use of Compose’s env_file directive, which allows us to specify environment variables to inject into the container at runtime:

env_file:
  - '.env.web'

A peek into .env.web shows:

SECRET_KEY_BASE=<Rails secret key>
# ...

The env_file directive is powerful in that it allows us to swap out environment configuration when we deploy and run our containers. Perhaps your container needs a different configuration in development than on CI, or when deployed to staging or production.
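For example, the db container’s .env.db might hold the official mysql image’s bootstrap variables (the variable names come from the image’s documentation; the values here are placeholders):

# .env.db
MYSQL_ROOT_PASSWORD=<root password>
MYSQL_DATABASE=myapp_development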

Creating containers and booting them up.

Now it’s time to assemble the container. From within the Rails app, run:

$ docker-compose build

This downloads the base images and builds the images that your web app and your db containers will run from, linking them up. You will need to re-run the docker-compose build command every time you change the Dockerfile or Gemfile.
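If only one image is affected, you can limit the rebuild by naming the service:

$ docker-compose build web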

Running your app in containers

You can bring up your Rails server and associated containers by running:

$ docker-compose up

This is a combination of the build, link, and start-services commands for each container. You should see output indicating that both the web and db containers, as configured in the docker-compose.yml file, are booting up.
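To confirm that both containers are running (and to see their mapped ports), use:

$ docker-compose ps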

Development workflow

I was pleasantly surprised to discover that developing with Docker added very little overhead to the development process. In fact, most commands that you would run for Rails simply need to be prefixed with docker-compose run web.

When you want to run:           With Docker Compose, you would run:

bundle install                  docker-compose run web bundle install
rails s                         docker-compose run web rails s
rspec spec/path/to/spec.rb      docker-compose run web rspec spec/path/to/spec.rb
RAILS_ENV=test rake db:create   docker-compose run -e RAILS_ENV=test web rake db:create
tail -f log/development.log     docker-compose run web tail -f log/development.log
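Since nearly every command shares the same prefix, a small shell alias (a convenience of mine, not required) saves some typing:

# in your .bashrc or .zshrc
alias dcw='docker-compose run web'
# then: dcw rspec spec/path/to/spec.rb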


Here are some nice development tricks I found useful when working with Docker:

  1. Add a dockerhost entry to your /etc/hosts file so you can visit dockerhost from your browser.
    $ boot2docker ip

    Then add an entry to your /etc/hosts mapping that IP to dockerhost:

    <output of boot2docker ip> dockerhost

    Now you can pull up your app at dockerhost:3000.


  2. Debugging containers with docker exec
    Sometimes you need to get inside a container to see what’s really happening. Perhaps you need to test whether a port is actually open, or verify that a process is running. This can be accomplished by grabbing the container ID with docker ps, then passing that ID to the docker exec command:

    $ docker ps
    301fa6331388        myrailsapp_web:latest
    $ docker exec -it 301fa6331388 /bin/bash
  3. Showing environment variables in a container with docker-compose run web env
    $ docker-compose run web env
  4. Running an interactive debugger (like pry) in your Docker container

    It takes a little extra work to get Docker to allow interactive terminal debugging with tools like byebug or pry. Should you desire to start your web server with debugging capabilities, you will need to use the --service-ports flag with the run command.

    $ docker-compose run --service-ports web

    This works due to two internal implementations of docker-compose run:

    • docker-compose run creates a TTY session for your app to connect to, allowing interactive debugging. The default docker-compose up command does not create a TTY session.
    • The run command does not map ports to the Docker host by default. The --service-ports directive maps the container’s ports to the host’s ports, allowing you to visit the container from your web browser.
  5. Use slim images in production when possible

    Often, your base image will come with a -slim variant on Docker Hub. This usually means the image maintainer has supplied a trimmed-down version of the image, with source code and build-time files stripped out. You can often shave a couple hundred megabytes off your resulting image — we did when we switched our ruby image from 2.2.1 to 2.2.1-slim. This means faster deployments, thanks to less network I/O between the registry and the deployment target.
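    The switch itself is a one-line change, with one caveat: slim variants strip the build toolchain, so compilers and headers needed by native gems must be installed explicitly. A sketch:

    FROM ruby:2.2.1-slim
    # slim images drop build tools; native extensions still need them
    RUN apt-get update -qq && apt-get install -y build-essential libmysqlclient-dev mysql-client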


  • Remember that your app runs in containers – so every time you do a docker-compose run, Compose spins up a new container for your command; dependency containers are also spun up, but only if they are not already running, in which case the new container is simply linked to the running ones. This means it’s possible to spin up multiple instances of your app without thinking about it – for example, you may have web and db containers already up from a docker-compose up command, and then in a separate terminal window you run docker-compose run web rails c. That spins up another web container to execute the command, but links that container with the pre-launched db container.
  • There is a small but noticeable performance penalty from running through both the VirtualBox VM and Docker. I’ve generally noticed a few extra seconds of waiting when starting a Rails environment. My overall experience has been that the penalty is not large enough to be painful.

Try it out

Give this a shot and let me know how Docker has been working for you. What have your experiences been? What are ways in which you’ve been able to get your Docker workflow smoother? Share in the comments below.

Coming up: integration with CI and deployment.

In upcoming blog posts, we will investigate how to use the power of Docker Compose to test and build your containers in a CI-powered workflow, push to Docker registries, and deploy to production. Stay tuned!


  Comments: 39

  1. Hello, thanks for the article.

    I was playing with docker-compose & rails on OSX the other day. And I had a lot of performance issues, booting the App took ~20sec instead of the second it normally takes.

    I’ve been trying to find a solution and two came to me. The problem seems to be VirtualBox sync for the /Users folder, and people recommend syncing this folder using either rsync or NFS instead of VBox folder sharing (even with guest additions). This could be useful

    Did you encounter the same issues?

    • Steve Jernigan

      We have swapped to using docker and docker-compose across our dev environments. We too saw some configurations with large performance issues. Most of these seemed related to volume mounts. We also ran into problems with performance and permissioning in boot2docker, which caused us to swap to vagrant/virtualbox. Using NFS worked the best for us. We’ve put the DB data files on a separate volume to make getting a new dump quick. We’re in a good spot now, with over 20 containers running everything from databases and search engines to email utilities. Any performance difference is worth the trade for a complete environment. Now we can stop spending effort on debugging differences in lower environments and writing development stubs.

    • Hi there,

      20 seconds vs. one second seems like a lot of time! That can be frustrating. What type of app are you running? Is it particularly large, or contain a lot of large binary assets?

  2. Salim Semaoune

    My solution to the performance issue was to drop boot2docker for vagrant. The problem is caused by poor performance from VirtualBox shared folders on OSX (see

    Here is a gist with my Vagrantfile; it uses fig, but with newer versions of docker you can replace fig with docker-compose (no need to install it).

  3. I experimented with the same docker-compose instructions at about the same time you did (perhaps triggered by the same docker mailing list email). I think adding a third container to the stack which runs tests using something like guard would be ideal – when you start the stack, you see the logs for the db and web containers, and interlaced with this it would be nice to also see the output from guard running your tests. This seems like an easy task – the test container is identical to the web container, except that instead of running ‘rails s’ it triggers guard to run. However, when I do that, guard runs one time and then exits, causing the entire stack to shut down cleanly. Whatever is on the ‘command’ line must be something that actively holds the terminal. Still trying to figure it out. That, plus having the database put its files in a volume, and this would be the perfect docker+rails stack.

    • Martin Stabenfeldt

      Jason, if you start your containers with `docker-compose up -d`, then the other containers will stay alive even if one stops.

    • You can also use rerun with rspec! It can rerun a program when files in directories you select change, and it works with sidekiq containers as well.

  4. Thanks for a great write up, Andrew. I’m using a very similar config. While I love the volumes: -.:/myapp directive for dev environment, this caused issues for me with assets pre-compilation in prod: on new builds, new assets would be pre-compiled as expected, but then when running the new app it would still use the old assets…

    I haven’t had a chance to figure out exactly why this happened yet, but I think it was related to the running app writing files back to tmp/cache, which would then get re-used when a new container is built. Anyhow… just wondering if you experienced any similar issue?

    Looking forward to your next article! I tried tutum but was disappointed to see no native support for docker-compose.yml, so for now I’m just keeping it simple and using a git post-update hook to trigger a docker-compose build / stop / up -d on the host…

    • Hey Seb,

      I did forget to add one extra trick I’ve used to manage precompilation: I’ve changed my development.rb file in my Rails app to point to a different directory, thus skipping the precompiled assets in `public/assets`:

      config.assets.prefix = "/assets_dev"

      Give that a shot and let me know how it works.

      • Aww… that’s a clever trick, thank you Andrew! In my case, I ended up having different docker-compose.yml files as I needed to point to different env_file entries anyway. I have an alias set up in my dev environment as a shortcut for "docker-compose -f ", so I didn’t really need to solve the issue – I could simply remove the volume-related lines in production – but I think your trick looks like a better solution if you don’t need different env files.

  5. Good luck getting TDD tools like guard to work properly in a Docker environment. guard just doesn’t see any changes.

    • Steve Jernigan

      We have users with guard running on the host, monitoring for changes and triggering CSS compilation.

    • Martin Stabenfeldt

      The best solution I’ve found so far is running Guard on the host machine. The docker environment takes care of setting up databases and so on. Guard uses the databases found on the dockerhost.
      The only drawback I’ve found so far is that you have to install the gems in the test group on the host.

    • If you are using Docker on top of a VirtualBox VM on Mac and have a shared volume, nginx, apache, and other web servers will not see changes to the files unless you use sendfile off; in your nginx config, or the similar setting in other servers’ config files.

  6. A few weeks ago I started using docker-compose to develop Rails applications. I came up with a similar configuration, and I noticed that every time I modified the Gemfile and ran docker-compose build, it installed ALL the gems again. Is there a way to avoid this and install only the ones that changed?
    Also, I’ve noticed that when I run docker images it shows lots of images that have huge virtual sizes.

    Great post!

  7. Alfonso Gonzalez

    Hello Andrew,

    First of all, thanks for sharing your experience using docker for rails development. Great post!

    Like most of you, I’ve been working on using docker for rails applications. I’ve got to kind of the same Dockerfile and docker-compose.yml.
    When it comes to bundle install, though, there is a lot of discussion. We all agree that one of the great things about docker is the ability to use the same image across different environments. Since a Gemfile can specify different groups or environments that install different gems, how would you go about this?
    Would you install all the gems in the docker image, shipping development/test gems to production that we don’t need and increasing the docker image size unnecessarily?
    Or would you create a different docker image for production without the development/test gems on it?

    • Alfonso,

      Great questions. Many people have different solutions for this; our solution for our app was to bundle all gems for test, development, and deployment into one image. It’s a fairly low-profile application, and the performance/security impact was deemed to be low.

      A possible workaround could be to change out your Dockerfile with a production Dockerfile, which would bundle your tooling and gemset to be deployment-ready.

      You could also parameterize your Gemfile with an environment variable, and have your CI system interpolate the right deployment environment variables at deploy time via the `.env.*` files.

  8. Edwin Lunando

    I followed your guide, but I cannot get the rails console running. I’m using Rails 4.1.6 and docker-compose 1.2.0.

    When I tried to run "docker-compose run web rails c", the console hung and raised an exception.

    [1] pry(main)> Traceback (most recent call last):
    File "/usr/local/bin/docker-compose", line 9, in
    load_entry_point('docker-compose==1.2.0', 'console_scripts', 'docker-compose')()
    File "/Library/Python/2.7/site-packages/compose/cli/", line 31, in main
    File "/Library/Python/2.7/site-packages/compose/cli/", line 21, in sys_dispatch
    self.dispatch(sys.argv[1:], None)
    File "/Library/Python/2.7/site-packages/compose/cli/", line 27, in dispatch
    super(Command, self).dispatch(*args, **kwargs)
    File "/Library/Python/2.7/site-packages/compose/cli/", line 24, in dispatch
    self.perform_command(*self.parse(argv, global_options))
    File "/Library/Python/2.7/site-packages/compose/cli/", line 59, in perform_command
    handler(project, command_options)
    File "/Library/Python/2.7/site-packages/compose/cli/", line 345, in run
    dockerpty.start(project.client,, interactive=not options['-T'])
    File "/Library/Python/2.7/site-packages/dockerpty/", line 27, in start
    PseudoTerminal(client, container, interactive=interactive, stdout=stdout, stderr=stderr, stdin=stdin).start()
    File "/Library/Python/2.7/site-packages/dockerpty/", line 154, in start
    File "/Library/Python/2.7/site-packages/dockerpty/", line 242, in _hijack_tty
    File "/Library/Python/2.7/site-packages/dockerpty/", line 164, in do_write
    raise e
    OSError: [Errno 32] Broken pipe

    Can you help me? Thanks.

    • I think it’s because you still need to attach to STDIN. Can you use the web console? That might be the easiest solution.

  9. This was a great read, thanks!

  10. Showstoppers:
    1) Adding or changing a single gem or apt-get dependency requires a docker image rebuild. This includes fetching all gems again, probably nullifying any performance gains you might get through deployment improvements or environment isolation.

    2) Rails generators create files owned by the docker process user, i.e. root or whatever container-specific user is in use. In the latter case, as I understand it, if UIDs are made to match inside the container and on the host machine, the expected ownership on files might be achieved. Until there’s another developer with another host UID…

    • Steve Jernigan

      1. Yes, but you can cheat in development to play with gems. Plus it’s one person’s pain instead of everyone’s.
      2. Not a problem for us. Code is shared in from the host via NFS. Commits are always(?) done from the host too.

    • Agreed with Steve here. The pain of rebuilding a container’s gems, though great, will only happen once for the developer. Your use case with ownership permissions is interesting, though. We never commit from within the container; only from the host machine. Are you having problems with that use case?

      • How is a docker image rebuild going to happen once per developer? It happens every time a newer gem version is introduced or a new gem is added. Considering how fast ruby gems are updated, this could mean a few rebuilds per week depending on the stack. As for files, it’s not a commit issue. If you run a rails generator (which you would like to run inside docker, since the whole point is to keep the dev env there), it runs under the "docker" user with a standard setup. The files it generates are owned by the docker user, not you. The NFS trick Steve mentioned seems like a way around it. With some extra complexity for the image setup, but better than sudo chown every time.

    • Colin Mackenzie

      1.) I have found a solution to that issue by adding this to my Dockerfile

      # — Add this to your Dockerfile —
      BUNDLE_JOBS=2 \

      and add this to your web config in the docker-compose file:

      – bundle

      Then bundle your gems with docker-compose run web bundle – no need to build again.

      2.) still an issue and annoying

  11. This is all good, but you missed the part about database credentials like user and password. It is silly to have them in database.yml… rather than specifying them somewhere in the Dockerfile. Thanks.

    • Hey Bogdan,

      I’m curious what you’re referring to with passwords checked into database.yml. We’ve set up the app to read those fields from environment variables, then use Docker Compose to write the variables into the container’s environment. A sample:

      development: &default
      url: < %= ENV['DATABASE_URL'] %>
      database: myapp_development
      password: < %= ENV['DATABASE_PASSWORD'] %>

      What do you think?

      • Perhaps you were commenting on passwords checked into the `.env.*` files? Those files should not contain production passwords – instead, you should configure your CI system to generate those files at build time, when doing production (or production-like) deployments.

  12. Hi there, nice post!
    Steve or Andrew, I’m curious about the "Yes, but you can cheat in development to play with gems".. how does that work? I agree with Bunter about the need to rebuild everything again after a single gem change. Maybe you found a way. I mean, the hack of using the /tmp folder to cache an unchanged Gemfile is great, but it still needs a different twist I think. That other thing could be accomplished by using the trick of a volume to store previously downloaded and installed gems explained here:
    Now I think the solution would be a combination of both hacks, but I haven’t been able to get it to work, either due to lack of experience with Docker or due to something else I’m missing.
    Love to hear what you guys think.

    • Hey Agustin,

      To reply to you (and everyone else) – the pain of rebuilding the image with a gem change is felt for each developer, every time someone changes the Gemfile. Our team does not directly attempt to solve this pain.

      It also depends on how large your Gemfile is, how large your team is, and how your team changes gems. In our case, we are a small team with infrequent changes to the Gemfile (once a week, at worst). An extra 5 minutes waiting for the container to rebuild isn’t the end of the world.

    • I only meant that you can easily install new gems in your local development container to experiment with them without building the container. The developer who commits an addition to the Gemfile still has to rebuild the container. However, it’s only that developer and decently rare. More importantly, the developers committing Gemfile changes usually have the skills to build a container. Our non-developer staff who run test environments (i.e., designers, qa, etc) can be blissfully unaware of ruby upgrades and gem additions. Pre-docker, I recall the addition or upgrade of some gems (maybe rmagick) that would trigger the reinstall of imagemagick on every development platform. Committing that change was a quick way to be the focus of unwanted chat memes. 🙂

  13. Hi,

    Great post! This is what I am looking for recently.

    I have a small question about the MySQL connection, my database.yml is

    default: &default
    adapter: mysql2
    encoding: utf8
    pool: 5
    username: root
    host: db

    When I ran docker-compose up, it complained like this:

    /usr/local/bundle/gems/mysql2-0.3.20/lib/mysql2/client.rb:70:in `connect': Host 'iotabackend_web_1' is not allowed to connect to this MySQL server (Mysql2::Error)

    Could you give me any advice about this issue?


  14. Hi,

    Thank you for your great post. It takes a really pragmatic approach, which is great.

    Will you soon publish an article on how to deploy to production, as you mentioned at the end of the article?

    Until recently, I was using Vagrant for development environments and Capistrano for deployment. I switched to Docker (with Docker Compose) for the first part, but I still don’t have a good workflow for production deployment.

    So if you have some insight (with the same pragmatic approach) on this part, that would be appreciated 🙂

  15. I have a problem with my Docker rails instance. I have linked a postgresql image together with the rails app. When I try to run docker-compose with rake db:migrate, it tells me that the rake command is not found and that I should add the missing gem by installing it, but bundle install has already installed rake a dozen times… sadly it is not found. Any idea how to fix this?

    Kind regards

  16. Here is a solution to the performance problems on OSX with vboxsf using rsync:

  17. I came back here several times to search for the article on deployment, but as it turns out there is no such article yet. 😉 Actually, your article was the reason why I started using docker and docker-compose back then. On my local machine I have a running system, but I want to deploy it and am having problems getting started because my docker-compose setup uses several images, for elasticsearch and postgresql for example. It would be very nice if you could add a paragraph on how to deploy a project like the one described above. Or maybe you know someone’s website where the process is described.

  18. Something I can’t quite reconcile between what I got working and all these guides I’m seeing:

    My Dockerfile only contains two lines:

    FROM ruby:2.3.1-onbuild
    CMD ["bin/rails", "server", "--port", "3000", "--binding", ""]

    Despite this, I still see all the bundle install stuff going on when I build the image. So what are all the additional lines in the examples above which seem to be programmatically telling it to install the bundle? Is that old boilerplate for which a newer version of the image has removed the need?

    • You’re absolutely correct! In the 18 months since this article was written, it’s much more elegant to use ONBUILD commands provided by your onbuild image to do library installation and app linking.

      See the source for Ruby’s onbuild Dockerfile: You’ll see that the install and app COPY commands are provided by the parent container for the child container to execute. If you use an onbuild image, you can skip the “bundle install” and “COPY .” commands.
