When getting familiar with Docker, people first compare containers to virtual machines and ask how Docker is different from virtual machines (check my answer on that too). In this post, I want to answer a question I recently heard from one of my colleagues:

“Why is deploying Docker Containers better than deploying via RPM package?”

I would like to elaborate on the details and compare Linux containers (e.g. Docker) with package managers (RPM or DEB) in that respect.

Package Manager Systems (PMS)

One of the key differences between the two is the concept of dependencies. Package manager systems (PMS) are designed to work with a tree of package dependencies. In fact, a PMS is all about reusing packages and simplifying the management of this dependency tree. To be more specific:

A PMS maintains a host-specific subset of the globally available package tree, with package versions and subtrees.

Indeed, this is a very powerful tool, but it has natural limitations as well. Package managers were created in a host-centric paradigm. This paradigm considers the host as the unit of maintenance where software has to be installed (and configured). Technically it means that a set of packages in particular versions is installed on a particular host. Here the PMS tries to be smart and reuse packages. However, a PMS does not include any run-time specification, and of course no specification of isolation. A host appears to be a bag into which all packages are thrown. Therefore, in this paradigm, nobody would usually come up with the idea of installing several mail servers or API services of the same kind but in different versions on the same host. Even installing multiple databases on the same host via a PMS is seen as a bad idea in this sense. Let's summarize why:

  • Your package manager will not allow you (at least not easily) to install the same software in different versions. (You may still be able to start several instances of the same software by giving a specialized configuration to the particular processes; a small illustration follows this list.)
  • Different software of the same system type (say, API services) will still share too many OS resources and may simply fail to work without additional configuration.
  • Some software cannot be installed at all, because already installed software depends on some package, but in the wrong version. Add to that all the other flavors of dependency hell.
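To make the first point concrete, here is an illustrative shell session on an RPM-based host. The package names and error output are made up for the sake of the example, and the exact behavior varies by distribution, but the pattern is the same: the second version conflicts with files already owned by the first.

```
# PHP 7.4 is already installed on this host
$ rpm -q php
php-7.4.33-1.el8.x86_64

# Trying to install a second version of the same package typically fails,
# because both versions want to own the same files and paths
$ sudo yum install php8
...
Error: Transaction test error:
  file /usr/bin/php conflicts between attempted install of php8
  and installed package php-7.4.33-1.el8.x86_64
```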

Why Containers

Contrary to that, Docker doesn't define any hard dependencies between images/containers. Containers are simply isolated from each other, so you can install any version of the software you like. Reuse of disk space and I/O is still possible through image layering. Since a container is also a run-time concept and defines a standard interface, the application inside the container can stay agnostic to most of the host's OS specifics and configuration. At the end of the day, it is always possible to map a container's exported ports to host system ports, so you don't have to care about them at development time.
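A rough sketch of what that looks like (the image name, tags, and ports are just examples): two versions of the same service run side by side on one host, each listening on its own internal port 8080, mapped by Docker to different host ports.

```
# Two versions of the same service on one host; no dependency conflicts,
# because each container brings its own filesystem and runtime
$ docker run -d --name api-v1 -p 8081:8080 mycompany/api:1.0
$ docker run -d --name api-v2 -p 8082:8080 mycompany/api:2.0
```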

This opens up new possibilities and new paradigms. In the cloud-centric paradigm, you care much less about particular hosts (including virtual machines): they all become the same kind of container (cattle, not pets). Thinking that way reduces complexity dramatically. Imagine 10, 100, or better 1000 hosts that are provisioned by package managers and may, by accident or even by design, become unique. Crazy… Tools like Puppet and Chef (still born in the old paradigm) help you eliminate differences between your hosts while still using a package manager, but even they cannot prevent configuration drift completely.

As a side note, a prominent and successful example of this new OS paradigm is CoreOS, which ships only the core and no package manager; everything else is installed as a Docker container or via its own container run-time implementation, rkt (Rocket).

Packages inside, Containers outside

That much is evident. Using Docker as a deployment artifact is just a natural step in a cloud environment, because it gives you the ability to install any software in any version on any host and wire it to the rest in a standardized way. This way, you don't have to care too much about the shape of the cloud while designing your service.

Utilize package managers to build your containers. Keep containers slim (yes, we are in the microservices age) and provision them conveniently and accurately via the package manager of your choice.
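A minimal sketch of what I mean (the base image, package list, and entrypoint are just placeholder examples): the package manager does its work once, inside the image build, and the finished container is what gets shipped.

```
# Dockerfile: let the package manager provision the image at build time
FROM debian:bookworm-slim

# Install exactly what this one service needs, nothing more
RUN apt-get update \
 && apt-get install -y --no-install-recommends php-cli php-curl \
 && rm -rf /var/lib/apt/lists/*

# src/ and server.php stand in for your actual application
COPY src/ /app/
WORKDIR /app
CMD ["php", "server.php"]
```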

Top 10 reasons to use Docker deployment instead of RPM

Still not convinced? ;) OK, I'll try with a top 10, and we can discuss it in the comments :) Use containers because of:

  • Runtime isolation: configurable resource limits (see the sketch after this list)
  • Runtime isolation: ports can be remapped, even for 3rd-party or legacy software
  • No package dependency hell. Use different versions of PHP, Perl, Ruby, npm… whatever, on the same host…
  • Integrate the deployment of third-party or legacy software into your standard Docker deployment
  • Profit from unified container boundaries (logging, monitoring, backup)
  • Easier participation in the cloud. As soon as you package your software into a standard container and deploy it to the cloud, you can start profiting from cloud features.
  • Deploy the entire software stack (e.g. DB, engine, web) as one Docker image. This can sometimes be a good idea for pre-cloud-era software.
  • Easier to start everything you need on your laptop
  • A lot of predefined containers already exist for every kind of third-party software out there.
  • No distribution borders. Run everything built for the Linux kernel on any distribution.
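For the first two points, a small sketch (the image name and limit values are arbitrary; the flags are standard docker run options): a legacy service gets a memory and CPU cap and a remapped port, without any change to the software itself.

```
# Cap a legacy service at 512 MB RAM and one CPU core,
# and remap its hard-coded internal port 80 to host port 8080
$ docker run -d \
    --name legacy-app \
    --memory 512m \
    --cpus 1 \
    -p 8080:80 \
    legacy/app:latest
```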

That is why I currently think we should not deliver any software as packages anymore… Feel free to comment.