Docker is still a young project, and the ecosystem around it hasn't yet matured to the point where many people feel comfortable using it. Such a fast-growing set of tools is nice to have, but the downside is that many of those tools are not production ready. As the ecosystem solidifies and Docker adoption grows, I think we will see a healthy set of solid, production-ready tools built on top of the current generation.
Once you get introduced to the concepts and ideas behind Docker, you quickly realize the power and potential it holds. Inevitably, though, there comes a "now what?" moment: you see that Docker can do some interesting things, but you get stuck, because there are barriers to simply dropping Docker into a production environment.
One problem is that you can't simply "turn on" Docker in your environment; you need tools to manage images and containers, handle orchestration, support development, and so on. So there are a number of challenges to overcome before you can do useful and interesting things with Docker, once you get past the introductory novelty of building an image and deploying simple containers.
I will attempt to make sense of the current state of Docker and to take some of the guesswork out of which tools to use in which situations and scenarios, for those who are hesitant to adopt Docker. This post will focus mostly on the development side of the Docker ecosystem, because that is a nice gateway to working with and getting acquainted with Docker.
As you may be aware, Docker does not (yet) run natively on Mac OS X or Windows. This can definitely be a hindrance to adopting Docker and building acceptance amongst developers. Boot2Docker massively simplifies this issue by essentially creating a sandbox to work with Docker: a thin layer between Docker and your Mac (or Windows machine) in the form of the boot2docker VM.
You can check it out here, but essentially you download a package, install it, and you are ready to start hacking away on Docker on your Mac. Definitely a must for Mac OS X and Windows users looking to begin their Docker journey, because the complexity is almost completely removed.
Behind the scenes, Boot2Docker abstracts away and simplifies a number of things, like setting up SSH keys, managing network interfaces, and setting up VM integrations and guest additions. Boot2Docker also bundles a CLI for the VM that runs Docker, so it is easy to manage and configure the VM from the terminal.
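To make the workflow concrete, here is a rough sketch of a typical first session. The `boot2docker` commands themselves are shown commented out since they require the VM to be installed, and the address below is only the VM's typical default, so yours may differ:

```shell
# Create and boot the VM that hosts the Docker daemon
# (requires the Boot2Docker package to be installed):
#   boot2docker init
#   boot2docker up

# boot2docker then prints the address of the Docker daemon inside the
# VM; you point your local docker client at it via DOCKER_HOST.
# The value below is a typical default, not guaranteed for your setup.
export DOCKER_HOST=tcp://192.168.59.103:2375
echo "Docker client will talk to: $DOCKER_HOST"

# From here, plain docker commands on your Mac transparently hit the VM:
#   docker ps
```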
It would take many blog posts to describe everything that CoreOS and its tooling can do. The reason I am mentioning it here is that CoreOS is becoming one of the core building blocks of any Docker environment. Docker, as it is today, is not specifically designed for distributed workloads and doesn't provide much tooling for the challenges that accompany distributed systems. CoreOS bridges this gap very well.
CoreOS is a minimal Linux distribution that aims to help with a number of Docker-related tasks and challenges. It is distributed by design, so it can do some really interesting things with images and containers using etcd, systemd, fleet, confd and others as the platform continues to evolve.
Because of this tooling and philosophy, CoreOS machines can be rebooted on the fly without interrupting services, since processes are clustered across machines. This means that maintenance can occur whenever and wherever, which makes the resiliency factor very high for CoreOS servers.
Another highlight is its push-based security model. Instead of administrators manually applying security patches, the CoreOS maintainers periodically push updates to servers, alleviating the need for constant manual updating. This was very nice when the Shellshock vulnerability was disclosed: within a day or so, a patch was automatically pushed to all CoreOS servers, automating an otherwise tedious process, especially for those without config management tools.
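To give a flavor of the fleet side of this tooling, here is a minimal sketch of a fleet unit file, which is just a systemd unit plus an optional [X-Fleet] section. The unit name, container name, and the long-running command are hypothetical placeholders:

```shell
# Write a minimal fleet unit that runs a container cluster-wide.
# Everything in it (names, image, command) is a hypothetical example.
cat > myapp.service <<'EOF'
[Unit]
Description=MyApp container
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --name myapp busybox /bin/sh -c "while true; do sleep 60; done"
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# fleet-specific scheduling hints go here, e.g. machine constraints
EOF

# On a real CoreOS cluster you would then submit and inspect it
# (commented out here, since it needs a running cluster):
#   fleetctl start myapp.service
#   fleetctl list-units
#   etcdctl ls /        # etcd holds the cluster state fleet relies on
```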
Fig is a must-have for anybody who works with Docker on a regular basis, i.e. developers. Fig allows you to define your environment in a simple YAML config file and then bring up an entire development environment with one command: fig up.
Fig works very well in a development workflow because you can rapidly prototype and test how Docker images will work together, catching issues early that would otherwise be hard to test for. For example, if you are working on an application stack, you can simply define how the different containers should work and interact from the fig file.
The downside to Fig is that, in its current form, it isn't really equipped to deal with distributed Docker hosts, something a large number of projects are attempting to solve and simplify. This shouldn't be an issue, though, if you are aware of the limitation beforehand and know that there are some workloads Fig is not built for.
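For the sake of illustration, here is a minimal sketch of the kind of fig.yml described above, wiring a hypothetical web app to a Redis container (the build path and port mapping are placeholders, not from any real project):

```shell
# A minimal fig.yml: a "web" service built from the current directory,
# linked to a stock Redis container. Names and ports are hypothetical.
cat > fig.yml <<'EOF'
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - redis
redis:
  image: redis
EOF

# With a Docker daemon available, one command brings the stack up
# (commented out here, since it needs Docker running):
#   fig up
```

The links entry is what makes Redis reachable from the web container, which is exactly the "define how the containers interact" piece that makes Fig so handy for prototyping.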
This is a cool project out of CenturyLink Labs that aims to solve problems around Docker app development and orchestration. Panamax is similar to Fig in that it stitches containers together logically, but it differs in a few regards. First, Panamax builds on CoreOS to leverage some of its built-in tools: etcd, fleet, etc. Also note that Panamax currently supports only single-host deployments. Its creators have stated that clustering and multi-host support are in the works, but for now you will have to run Panamax on a single host.
Panamax simplifies Docker image and application orchestration behind the scenes, and additionally places a nice layer of abstraction on top of the process, so that managing the Docker image "stack" becomes even easier through a slick GUI. From the GUI you can set environment variables, link containers together, and bind ports and volumes.
Panamax draws a number of its concepts from Fig. It uses templates as the underlying way to compose containers and applications, which is similar to the Fig config files, as both use YAML files to compose and orchestrate Docker container behavior. Another cool thing about Panamax is that there is a public template repo for getting different application and container stacks up and running, so the community participation is a really nice aspect of the project.
If hand-editing config files on the command line isn't an ideal solution in your environment, this tool is definitely worth a look. Panamax is a great way to quickly develop and prototype Docker containers and applications.
This is a very young but interesting project, notable for the way it handles volume management. Right now one of the biggest obstacles to widespread Docker adoption is exactly the problem Flocker solves: persisting storage across distributed hosts.
From the project's GitHub page:
Flocker is a data volume manager and multi-host Docker cluster management tool. With it you can control your data using the same tools you use for your stateless applications by harnessing the power of ZFS on Linux.
Basically, Flocker uses some ZFS magic behind the scenes to allow volumes to float between servers, providing persistent storage across machines and containers. That is a huge win for building distributed systems that require persistent data and storage, e.g. databases.
Definitely keep an eye on this project for improvements, and look for its creators to push this area forward. They have said it isn't quite production ready yet, but it is a great tool to use in a test or staging environment.
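As a rough sketch of how this works in practice, Flocker drives deployments from two YAML files: a Fig-like application file that declares a volume, and a deployment file that pins applications to nodes. Every name, hostname, and image below is a hypothetical example, not taken from Flocker's docs:

```shell
# application.yml: a Fig-like description of a stateful container.
# The mountpoint marks the data Flocker manages with ZFS.
cat > application.yml <<'EOF'
"version": 1
"applications":
  "mysql-server":
    "image": "mysql:5.6"
    "volume":
      "mountpoint": "/var/lib/mysql"
EOF

# deployment.yml: which node runs which application. Moving
# "mysql-server" to node2 in this file and redeploying is how the
# ZFS-backed volume migrates along with the container.
cat > deployment.yml <<'EOF'
"version": 1
"nodes":
  "node1.example.com": ["mysql-server"]
  "node2.example.com": []
EOF

# With a Flocker cluster available you would then run (commented out):
#   flocker-deploy deployment.yml application.yml
```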
Flynn touts itself as a Platform as a Service (PaaS) built on Docker, in a very similar vein to Heroku. Having a Docker PaaS is a huge win for developers because it simplifies developer workflow. There are some great benefits to having a PaaS in your environment; the subject could easily expand into its own topic of conversation.
The approach Flynn takes (and PaaS in general) is that operations should be a product team. With Flynn, ops provides the platform and developers can focus on their own tasks: developing software and testing, with more time freed up for development instead of fighting operations. Flynn does a nice job of decoupling operations tasks from dev tasks, so developers don't need to rely on operations to do their work and operations don't need to concern themselves with development tasks, which can cause friction and create efficiency issues.
Flynn works by tying together a number of tools, created specifically to solve the challenges of building a PaaS (scheduling, persistent storage, orchestration, clustering, etc.), into one single entity that runs its workloads via Docker.
Currently its developers state that Flynn is not quite suitable for production use, but it is mature enough to play around with and even deploy apps to.
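Since Flynn follows the Heroku model, the developer-facing workflow boils down to a git push. The sketch below mimics that flow locally; the app name and remote URL are hypothetical (a real remote is set up for you by the flynn CLI), and the cluster-dependent commands are commented out:

```shell
# Heroku-style deploy flow as Flynn presents it to developers.
# The remote URL here is a made-up placeholder.
mkdir -p myapp && cd myapp
git init -q .
git remote add flynn https://git.flynn.example.com/myapp.git

# With a real Flynn cluster, the flynn CLI creates the app and the
# push triggers a build and deploy (commented out, needs a cluster):
#   flynn create myapp
#   git push flynn master

# Show the deploy target we just wired up:
git remote -v | grep flynn
```

This is the decoupling described above in miniature: ops runs the cluster behind that remote, and developers only ever touch git.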
Deis is another PaaS for Docker, aiming to solve the same problems and challenges that Flynn does, so as far as end users are concerned there is definitely some overlap between the projects. There is a nice CLI tool for managing and interacting with Deis, and it offers much of the same functionality that Heroku or Flynn offer. Deis can do things like horizontal application scaling, supports many different application frameworks, and is open source.
Deis is similar in concept to Flynn in that it aims to solve PaaS challenges but they are quite different in their implementation and how they actually achieve their goals.
Both Flynn and Deis aim to create platforms to build Docker apps on top of, but they go about it in somewhat different ways. As the creator of Deis explains, Deis is the more practical of the two: it fits together a number of technologies and tools that already exist, creating only the pieces that are missing. Flynn is more ambitious, implementing much of its own tooling, including its own scheduler, registration service, and so on, and relying on only a few existing tools. So while Flynn builds all of these different pieces itself, Deis leverages CoreOS to do many of the tasks it needs to operate, minimally bolting on tooling of its own.
As the Docker ecosystem continues to evolve, more and more options seem to be sprouting up. There are already a number of great tools in the space but as the community continues to evolve I believe that the current tools will continue to improve and new and useful tools will be built for Docker specific workloads. It is really cool to see how the Docker ecosystem is growing and how the tools and technologies are disrupting traditional views on a number of areas in tech including virtualization, DevOps, development, deployments and application development, among others.
I anticipate Docker adoption to continue growing for the foreseeable future, as the core Docker project continues to improve and stabilize, along with the tools built around it that I have discussed here. It will be interesting to see where things are even six months from now in regards to the adoption and the use cases that Docker has created.