Why switch to Docker deployment (instead of a package manager)

When getting familiar with Docker, people first compare containers to virtual machines and ask how Docker is different from virtual machines (check my answer on that too). In this post I want to answer a question I recently heard from one of my colleagues:

"Why is deploying Docker Containers better than deploying via RPM package?"

I would like to elaborate on some details and compare Linux containers (e.g. Docker) with package managers (RPM or DEB) in that regard.

Package Manager Systems (PMS)

One of the key differences between the two concepts is how they handle dependencies. Package manager systems (PMS) are designed to work with a tree of package dependencies. In fact, PMS are all about reuse and about simplifying the management of this dependency tree. To be more specific:

A PMS maintains a host-specific subset of the globally available package tree, with package versions and subtrees.

This is indeed a very powerful construct, but it has natural limitations. Package managers were created in a host-centric paradigm. This paradigm considers the host as the unit of maintenance on which software has to be installed. Technically, it results in a set of packages in specific versions installed on a particular host. Here the PMS tries to be "smart" and reuses packages. However, a PMS does not include any run-time specification, and of course no specification of isolation. A host ends up being the bag that all the packages are thrown into. Therefore, in this paradigm, usually nobody would come up with the idea of installing several mail servers, or several API services of the same kind but in different versions. Even the idea of installing multiple databases on the same host via a PMS may be seen as foolish. Let's see why, with a few examples:

  • Your package manager will not allow you (at least not easily) to install the same software in different versions; see the sketch after this list. (You may still be able to start several instances of the same software by giving each process a specialized configuration.)
  • Different software of the same system type (say, two API services) will still share too many OS resources and may simply fail to work without additional configuration.
  • Some software cannot be installed at all because already installed software depends on the same package, but in the wrong version; plus all the other flavors of dependency hell.
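
A minimal sketch of the first point, assuming a hypothetical package foo on an RPM-based host (the exact commands and messages vary by distribution):

    # RPM tracks exactly one installed version of a package per host:
    $ rpm -q foo
    foo-1.0-1.el7.x86_64

    # Trying to install a second version does not give you two parallel installs:
    $ sudo yum install foo-2.0
    # -> yum treats this as an update that replaces foo-1.0, or it fails
    #    with a file/dependency conflict; either way, only one version of
    #    foo remains installed on the host.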

Containers

In contrast, Docker does not define any hard dependencies between images/containers that could prevent you from installing any version of a container you like. Reuse of software happens through image layering. Since a container is a run-time concept and defines a standard boundary, the application inside the container can stay agnostic of the OS resources of the host. For example, you can always map a container's exported ports to host system ports, so you don't have to care about port assignments at development time.
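
As a sketch, here are two versions of the same server running side by side on one host, each mapped to a different host port (the image tags are just examples):

    # Two versions of the same software on one host, without conflicts:
    $ docker run -d --name web-old -p 8081:80 nginx:1.24
    $ docker run -d --name web-new -p 8082:80 nginx:1.25

    # Each container believes it owns port 80; the host chooses the real ports:
    $ curl -sI localhost:8081 | head -n 1
    $ curl -sI localhost:8082 | head -n 1

With a package manager, the second install would conflict with the first; here, each version lives inside its own boundary.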

This opens up new possibilities and new paradigms. In the cloud-centric paradigm, you care less about particular hosts (including virtual machines); they all start to look like the same kind of container. Thinking that way reduces complexity dramatically. Imagine 10, 100, or better yet 1000 hosts that are provisioned by package managers and may drift, or are even designed, to become unique? Crazy... Tools like Puppet and Chef (which were still born in the old paradigm) help you eliminate the differences between your hosts while still using a package manager, but there are also examples of the new paradigm. A prominent and successful example is CoreOS, which ships only the core and no package manager; everything else is installed as a Docker container or via its own container run-time implementation, rkt.

Packages inside, Containers outside

That is evidence enough. Using Docker as the deployment artefact is just the natural step in a cloud environment (I deliberately won't define the word "cloud" here), because it gives you the ability to install any software in any version on any host and wire it up to the rest in a standardized way. That way you don't have to care about the shape of the cloud while designing your service.

Use package managers to build your containers. Keep containers slim (yes, we are in the microservices age) and provision them conveniently and accurately via the package manager of your choice.
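
As a minimal sketch of this split (the base image and the installed package are just examples), the package manager does its work inside the image build, while Docker is the deployment artefact outside:

    # Dockerfile: the package manager provisions the *inside* of the container
    FROM debian:bookworm-slim

    RUN apt-get update \
     && apt-get install -y --no-install-recommends nginx \
     && rm -rf /var/lib/apt/lists/*

    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

Build once, then deploy the image rather than the package:

    $ docker build -t example/web:1.0 .
    $ docker run -d -p 8080:80 example/web:1.0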

Top 10 reasons to use Docker deployment instead of RPM

Still not convinced? ;) OK, I'll try with a top 10, and we can discuss it in the comments :)
You can still use RPM to install Docker itself, but once you have installed it, you profit from the following:

  1. Runtime isolation: configurable resource limits (see the sketch after this list)
  2. Runtime isolation: ports can be remapped, even for third-party or legacy software
  3. No package dependency hell. Use different versions of PHP, Perl, Ruby, npm... whatever, on the same host
  4. Integrate deployment of third-party or legacy software into your standard Docker deployment
  5. Profit from unified container boundaries (logging, monitoring, backup)
  6. Easier participation in the cloud. As soon as you package to a standard container and deploy it to a cloud, you profit from whatever cloud features you have (e.g. hot migration, automatic backup, autoscaling, and so on)
  7. Deploy an entire software stack (e.g. DB, engine, web) as one Docker image. Sometimes a good idea
  8. It's easier to start everything you need on your laptop
  9. There are a lot of predefined images for every kind of third-party software out there
  10. No distribution borders. Run anything built for the Linux kernel on any distribution
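
To illustrate the first point, here is a sketch of resource limits using standard docker run flags (the image and the numbers are just examples):

    # Cap a container at half a CPU core and 256 MB of RAM:
    $ docker run -d --name capped --cpus 0.5 --memory 256m nginx:1.25

    # Verify the limits Docker recorded for the container:
    $ docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' capped

An RPM-installed process gets no such boundary by default; with containers, the limit travels with the deployment unit.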

That is why I currently think we should not deliver any software as packages anymore... Feel free to comment.