Jun 22, 2016
Written by M. Scott Ford

We're Excited about Docker Distributed Application Bundles

If you have a software application that’s generating revenue, a system failure isn’t just inconvenient; it can be detrimental to your bottom line. Being able to recover from a system failure quickly is vital not just to the health of your code, but also to your business.

One of the best ways to recover quickly from a system failure is to have good documentation about how your system architecture is set up. That can be challenging, though. It takes a lot of discipline to keep your documentation updated as you make changes to your system. As a result, most codebases have documentation that reflects a prior state of the system rather than its current one.

Imagine if your system architecture were a side effect of executing your documentation. Then the easiest way to change your system architecture would be to update your documentation and execute it. That would ensure your system architecture and its documentation are always in sync.

In the current tool ecosystem, there are many tools that provide partial documentation. These tools let teams describe how each service is constructed, and teams can use them to rebuild any of those services from scratch. How those services interact and connect together, though, is still something teams mostly have to write down in plain language.
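A Dockerfile is a good example of this kind of partial documentation: it captures, executably, how to build one service from scratch, while saying nothing about the services around it. Here's a minimal, purely illustrative sketch (the base image and commands are placeholders, not from a real project):

```dockerfile
# Executable documentation for a single service: anyone on the team
# can rebuild this exact image with `docker build .`
FROM ruby:2.3

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy in the application code and define how the service starts
COPY . .
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0"]
```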

Docker Compose has been a great tool for creating executable documentation for development environments. Production environments have been more difficult, however. The tooling available for Docker Compose (prior to Docker 1.12) doesn't offer a good story for deploying those same services to production infrastructure and keeping them running there.

That’s the problem that Docker’s Distributed Application Bundles set out to solve. As someone who regularly works with clients whose system architecture documentation is lacking, I was excited to hear about this update. Here’s a recent experience that might give you a glimpse into why this is useful. Even though it describes a non-critical system, it demonstrates the thought process I went through while exploring whether Docker could solve this problem, and it might be useful for you, too.


I’ve been playing around with SonarQube recently (expect a blog post about that soon). I decided to set up our own hosting for SonarQube, based entirely on Docker and Docker Compose.

I like little projects like this as a way to force me to take a deep dive into a technology. I’ve been using Docker in my development workflow for a few months now, but this SonarQube effort was my first exploration into creating a production environment that uses Docker and Docker Compose.

I used Docker Compose to build a set of containers that worked together to run SonarQube in a production-like fashion. Docker Compose was great for specifying this in my development environment, so I could quickly experiment until I got things just the way I wanted them. I then expected that I would just be able to run a docker-compose command to deploy those services to a production server. I also expected that there’d be an easy way to set up something to monitor all of those containers and keep them running for me.
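Here’s roughly what that Compose file looked like. Treat this as a sketch rather than my exact file: the official sonarqube and postgres images are real, but the version tags, credentials, and volume name are illustrative.

```yaml
# docker-compose.yml -- SonarQube backed by PostgreSQL (illustrative
# version tags and credentials; adjust for your own environment)
version: '2'

services:
  sonarqube:
    image: sonarqube:5.6
    ports:
      - "9000:9000"
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    depends_on:
      - db

  db:
    image: postgres:9.5
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - sonarqube_db:/var/lib/postgresql/data

volumes:
  sonarqube_db:
```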

I ended up having to settle for an in-between solution. I used docker-machine to create a Docker container host, and then I was able to use the normal docker-compose commands to get the app running there. And that’s awesome! I was impressed with how painless that process was.
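The whole flow fits in a handful of commands. Here’s a sketch of what it looked like; the machine name and token variable are made up, and I’m using the DigitalOcean driver purely as an example, since the driver and its flags vary by provider.

```sh
# Provision a remote Docker host (driver and flags vary by provider;
# the DigitalOcean driver and $DO_TOKEN are placeholders here)
docker-machine create --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" sonarqube-host

# Point the local docker and docker-compose clients at the new host
eval $(docker-machine env sonarqube-host)

# Run the same Compose file from development against the remote host
docker-compose up -d
```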

I did start having some questions, though. What if I want to share the management of this container host with others on my team? How would that work? What if the container host gets rebooted? Will the containers start on their own? I’m guessing the answer to that is no. What if one of the containers needs to restart? How should I detect that, and how do I set up something to handle it? What’s the process I should follow if I want to update one of the containers? Is there a way to do that without making the application unavailable?
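To be fair, Compose can answer a small slice of this on its own: a restart policy asks the Docker daemon to bring a container back after a crash or a host reboot. But that’s per-container babysitting, not orchestration; it says nothing about shared management, coordinated updates, or zero-downtime deploys. A sketch:

```yaml
services:
  sonarqube:
    image: sonarqube:5.6
    # Ask the Docker daemon to restart this container whenever it
    # exits, including after the host itself reboots
    restart: always
```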

My research into the answers to these questions kept leading me to discussions about “container orchestration”, which is basically a way of addressing the problems I was thinking about. There are several different ways to go about it, and entire articles could be written on the available solutions. One thing many of those solutions had in common was vendor lock-in, which would force me to build something I couldn’t move to a different hosting provider as my needs changed. Avoiding vendor lock-in is one of the things that has excited me about Docker the most, so I was a little disappointed to find myself facing the problem again with Docker.

That was the state of our SonarQube-on-Docker production experiment at the end of last week. This week was DockerCon. No one on our team went to the conference, but I expected some cool announcements to come out of it, and I was not disappointed.

Docker announced that version 1.12 of Docker will have container orchestration features built in. The details of their approach make it clear that they’re not big fans of vendor lock-in either. At the center of their implementation are Distributed Application Bundles, and I’m super excited about these. A bundle describes how a collection of containers works together to create a distributed application, and Docker defines a process for deploying that set of containers into a production environment where they’re managed by the built-in orchestration features.
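Based on what’s been shown so far, the workflow looks roughly like this. Both the bundle and deploy steps are experimental in the 1.12 release candidate (the bundle step also needs a release-candidate build of docker-compose), so the exact commands could change:

```sh
# Turn a Compose file into a Distributed Application Bundle; this
# writes a .dab file named after the project (experimental)
docker-compose bundle

# Turn the production host into a swarm manager so the built-in
# orchestration features are available
docker swarm init

# Deploy the bundle; the engine creates a supervised service for
# each entry in the .dab file (experimental `deploy` command)
docker deploy sonarqube
```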

I’ve only had the chance to scratch the surface with the experimental support that’s available in the Docker 1.12 release candidate. I’ll be sure to share more as I learn more.

Have any of you given Docker 1.12 a try yet? What are your thoughts?
