Docker has been causing a lot of ripples in all sorts of ponds in recent years. I first started playing with it nearly a year ago now, after hearing about it from someone else at work. At first I didn't really understand what problems it was trying to solve. The more I played with it, however, the more interesting it became.
Gripes About Docker
There were plenty of things that I didn't care for about Docker. The most prominent strike against it was how slow it was to start, stop, and destroy containers. I soon learned that storing my Docker data on a btrfs partition made things much faster. And it was great! Things that used to take 10 minutes started taking 2 or 3 minutes. A very significant improvement.
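If you want to try the same thing, the setup boils down to pointing Docker's data directory at a btrfs mount and telling the daemon to use the btrfs storage driver. Something like this (the mount point is just an example, and where you put daemon options varies by distro):

    # /etc/default/docker on Debian/Ubuntu (other distros configure the
    # daemon elsewhere). /mnt/btrfs/docker is an example mount point.
    DOCKER_OPTS="--storage-driver=btrfs --graph=/mnt/btrfs/docker"

After restarting the daemon, docker info should report btrfs as the storage driver.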
But then it was still slow to actually build any containers that were less than trivial. For example, we've been using Docker for one of my side projects since April 2014 (coming from Vagrant). Installing all of the correct packages and whatnot inside of our base Docker image took several minutes--much longer than it does on iron or even in virtual machines. It was just slow. Anytime we had to update dependencies, we'd invalidate the image cache and spend a large chunk of time just waiting for an image to build. It was/is painful.
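To illustrate the caching problem with a contrived sketch (not our real Dockerfile): the moment the dependency list changes, every layer from that point on gets rebuilt, including the slow package-installation steps.

    FROM ubuntu:14.04

    # This layer stays cached across builds...
    RUN apt-get update && apt-get install -y build-essential python-dev python-pip

    # ...but as soon as requirements.txt changes, this layer and everything
    # after it gets rebuilt from scratch.
    ADD requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt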
On top of that, pushing and pulling from the public registry is much slower than a lot of us would like it to be. We set up a private registry for that side project, but it was still slower than it should be for something like that.
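To be fair, standing up that private registry was simple enough; the speed was the problem, not the setup. It was roughly this (the hostname is an example, and depending on your Docker version you may also need TLS in front of it or an --insecure-registry flag on the daemon):

    # Run the registry image on some internal host.
    docker run -d -p 5000:5000 --name registry registry

    # Tag an image against that registry and push it.
    docker tag myapp registry.internal:5000/myapp
    docker push registry.internal:5000/myapp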
Many of you reading this article have probably read most or all of those gripes from other Docker critics. They're fairly common complaints.
Lately, one of the things about using Docker for development that's become increasingly frustrating is communication between containers on different hosts. Docker uses environment variables to tell one container how to reach services on another container running on the same host. Using environment variables is a great way to avoid hardcoding IPs and ports in your applications. I love it. However, when your development environment consists of 8+ distinct containers, the behavior around those environment variables is annoying (in my opinion).
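For anyone who hasn't run into this, I'm talking about the --link mechanism. A rough sketch of how it looks (image names here are examples):

    # Start a database container, then link an app container to it.
    docker run -d --name db postgres
    docker run -d --name app --link db:db myapp

    # Inside the app container, Docker injects variables along the lines of:
    #   DB_PORT_5432_TCP_ADDR=172.17.0.2
    #   DB_PORT_5432_TCP_PORT=5432

Those values are baked in when the container starts, and the links only work between containers on a single host--which is exactly where the trouble starts once your environment outgrows one machine.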
Looking For Alternatives
I don't really feel like going into more detail on that right now. Let's just say it was frustrating enough for me to look at alternatives (more out of curiosity than really wanting to switch away from Docker). This search led me to straight Linux containers (LXC), upon which Docker was originally built.
I remembered trying to use LXC for a little while back in 2012, and it wasn't a very successful endeavor--probably because I didn't understand containers very well at the time. I also distinctly remember being very fond of Docker when I first tried it because it made LXC easy to use. That's actually how I pitched it to folks.
Long story short, I have been playing with LXC for the past while now. I'm quite happy with it this time around. It seems to better fit the bill for most of the things we have been doing with Docker. In my limited experience with LXC so far, it's generally faster, more flexible, and more mature than Docker.
What proof do I have that it's faster? I have no hard numbers right now, but building one of our Docker images could take anywhere from 10 to 20 minutes. And that was building on top of an already existing base image. The base image took a few minutes to build too, but it was built much less regularly than this other image. So 10-20 minutes just to install the application-specific packages--not the core packages, not to configure anything, just to install additional packages.
Building an entire LXC container from scratch, installing all dependencies, and configuring basically an all-in-one version of the 8 different containers (along with a significant number of other things for monitoring and such) has consistently taken less than 3 minutes on my 2010 laptop. The speed difference is phenomenal, and I don't even need btrfs. Launching the full container is basically as fast as launching a single-purpose Docker container.
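If you want to get a feel for the speed yourself, spinning up a container is only a couple of commands. With LXC 1.0's download template, it looks something like this (the container name, distro, and release are examples):

    # Create a container from a prebuilt image.
    sudo lxc-create -t download -n devbox -- -d ubuntu -r trusty -a amd64

    # Start it in the background, then attach a shell.
    sudo lxc-start -n devbox -d
    sudo lxc-attach -n devbox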
What proof do I have that LXC is more flexible than Docker? Have you tried running systemd inside of a Docker container? Yeah, it's not the most intuitive thing in the world (or at least it wasn't the last time I bothered to try it). LXC will let you use systemd without any fuss (that I've noticed, anyway). This probably isn't the greatest example of flexibility in the world of containers, but it certainly works for me.
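An easy way to see it in action: create a container from a systemd-based distro and look at PID 1. Something like this (the release number is just an example):

    # Create and start a container whose distro boots with systemd.
    sudo lxc-create -t download -n sysd -- -d fedora -r 21 -a amd64
    sudo lxc-start -n sysd -d

    # PID 1 inside the container is systemd, no special handling required.
    sudo lxc-attach -n sysd -- cat /proc/1/comm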
You also get some pretty interesting networking options, from what I've read. Not all of your containers need to be NAT'ed--some can be NAT'ed while others are bridged to appear on the same network as the host. I'm still exploring all of these goodies, so don't ask me for details about them just yet ;)
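From what I can tell so far, the difference comes down to a couple of lines in the container's config file. Roughly (these are the LXC 1.x key names; the bridge names depend on what exists on your host):

    # NAT'ed: attach to LXC's default NAT bridge (10.0.3.x on Ubuntu).
    lxc.network.type = veth
    lxc.network.link = lxcbr0
    lxc.network.flags = up

    # Bridged: point at a bridge on the host's own network instead, and the
    # container shows up with an address on the same LAN as the host.
    #lxc.network.link = br0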
What proof do I have that LXC is more mature than Docker? Prior to version 0.9, Docker's default execution environment was LXC. Version 0.9 introduced libcontainer, which replaced LXC as the default execution driver and removed Docker's hard dependency on it. The LXC project has been around since August 2008; Docker has been around since March 2013. That gave LXC roughly four and a half years to mature before Docker was even a thing.
What Now?
Does all of this mean I'll never use Docker again? That I'll use LXC for everything that Docker used to handle for me? No. I will still continue to use Docker for the foreseeable future. I'll just be more particular about when I use it vs when I use LXC.
I still find Docker to be incredibly useful and valuable. I just don't think it's as well suited to long-running development environments, or to replacing a fair amount of what folks have been using Vagrant to do. It can certainly handle that stuff, but LXC seems better suited to the task, at least in my experience.
Why do I think Docker is still useful and valuable? Well, let me share an example from work. We occasionally use a program with rather silly Java requirements. It requires a specific revision, and it must be 32-bit. It's really dumb. Installing and using this program on Ubuntu is really quite easy. Using the program on CentOS, however, is... quite an adventure. But not an adventure you really want to take. You just want to use that program.
All I had to do was compose a Dockerfile based on Ubuntu, toss a couple apt-get lines in there, build an image, and push it to our registry. Now any of our systems with Docker installed can happily use that program without having to deal with any of the particularities of that one program. The only real requirement now is an operational installation of Docker.
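The whole thing amounted to little more than this--with the caveat that the package and program names below are made up, since the real ones depend on that particular program:

    FROM ubuntu:14.04

    # Pull in the 32-bit Java runtime the program insists on (an exact
    # revision can be pinned with apt's pkg=version syntax if needed).
    RUN dpkg --add-architecture i386 && \
        apt-get update && \
        apt-get install -y openjdk-7-jre:i386

    # Hypothetical: copy the program in and make it the entrypoint.
    ADD silly-program /opt/silly-program
    ENTRYPOINT ["/opt/silly-program/run.sh"]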
Doing something like that is certainly doable with LXC, but it's not quite as cut and dried. In addition to having LXC installed, you also have to make sure that the container configuration file is suitable for each system where the program will run. This means making sure there's a bridged network adapter on the host, that the configuration file uses the correct interface name, that it doesn't try to use an IP address that's already claimed, etc etc.
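Concretely, it's lines like these that have to be right for each individual host before the container will even come up cleanly (the values here are examples):

    # Must name a bridge that actually exists on this particular host.
    lxc.network.link = br0

    # Must not collide with an address already in use on the network.
    lxc.network.ipv4 = 192.168.1.50/24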
Also, Docker gives you port forwarding, bind mounts, and other good stuff with some simple command line parameters. Again, port forwarding and bind mounts are perfectly doable with straight LXC, but it's more complicated than just passing some additional command line parameters.
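For comparison (the ports, paths, and addresses below are examples):

    # Docker: one flag each for a port forward and a bind mount.
    docker run -d -p 8080:80 -v /srv/app/data:/data myimage

    # Straight LXC: the bind mount lives in the container's config file...
    #   lxc.mount.entry = /srv/app/data data none bind 0 0
    # ...and the port forward is an iptables rule you maintain yourself:
    sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 \
        -j DNAT --to-destination 10.0.3.50:80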
Anyway. I just wanted to get that out there. LXC will likely replace most of my Linux-based virtual machines for the next while, but Docker still has a place in my toolbox.