Docker for your containers


Punnily puns the punster makes. Now, you might be wondering, what is he on about, and can I have some of the same stuff? Sure you can. Today, I want to talk to you about a fairly new and not quite production-ready technology called Linux Containers (LXC), and more specifically Docker, a frontend that makes using them easy.

All right, so we have a bunch of things to cover: what containers are, how they are supported in recent Linux kernel versions, what they can do and what they are good for, and finally, why you might need a docker for these containers, or rather Docker, with a capital D. To wit, keep reading.

Linux Containers

LXC is an operating-system-level virtualization method for running multiple isolated Linux systems on a single host. Hence the word, containers. LXC support is built into the kernel, and it allows you to create abstract layers of isolated resources, which the container operating system can use. LXC differs from virtualization in that it does not create a virtual machine with its own independent stack of hardware and drivers. Instead, it relies on a virtual environment that the host provides. This means that all contained operating systems share the host's kernel, rather than booting their own.

The upside of this approach is very little overhead, allowing a greater number of containers to run with a much lower performance penalty than full virtualization. You can also create minimal containers that expose only a few resources, enhancing separation, isolation and security.

The containment of resources is provided by yet another relatively new technology called Control Groups (cgroups), which allows weighted partitioning and sharing of system resources, including CPU cycles, memory plus swap, and disk and network I/O, while kernel namespaces provide the isolation. This means that newly created guests can exist without being aware of any other instances running on the control host.
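
To get a feel for what Docker hides from you, here is a minimal sketch of driving cgroups by hand, assuming a cgroup v1 memory hierarchy mounted under /sys/fs/cgroup/memory, as was typical on kernels of this era; the group name demo is made up:

# create a group with a 64 MB memory cap, then confine the current shell to it
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((64 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/tasks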

Just as geeky as it sounds, the implementation is fairly tricky, and it takes quite a bit of expertise to set up and run properly. You will need to be a decent system administrator with some passable shell scripting skills to use cgroups and LXC with any success.
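
For comparison, a bare-bones raw LXC session looks something like this; a sketch only, assuming the LXC userspace tools and the ubuntu template are installed, with mybox as a made-up container name:

sudo lxc-create -t ubuntu -n mybox   # build a container from the ubuntu template
sudo lxc-start -n mybox -d           # boot it in the background
sudo lxc-console -n mybox            # attach to its console
sudo lxc-stop -n mybox               # shut it down again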

Enter Docker

The whole idea behind the Docker engine is to provide a lightweight and simple method for wrapping and deploying containers without having to wrestle with the dirty bits and pieces under the hood. This way, you can create lots of versions of an operating system, with different configurations and security levels, which makes containers ideal for software testing, debugging and development.

Now, before we dig deeper and your enthusiasm spikes like mad, you must note that Docker is even less mature than containers themselves, so do not expect miracles. But a first taste of a brave new world should leave you hungry for more.

Docker setup

Installing Docker is trivial, provided your distribution is listed on the project's Get Started! page. If it is, good; follow the brief set of instructions and wait a few moments for the package to install. After that, hit the command line. If you were expecting a GUI, there isn't one yet.
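
Once the package is in, a quick sanity check does not hurt; both commands are part of the standard Docker CLI:

sudo docker version   # client and daemon versions should both report
sudo docker info      # running containers, images, storage driver in use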

Your first deployment

You might want to follow the online instructions to get Docker running. But basically, it is all about using the Docker wrapper commands to pull images and set up programs. For example, to get a very basic Bash shell running, based on Ubuntu:

Installing bash container

Or you can try the SSH daemon:

Installing SSHD
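
Screenshot aside, the rough idea is to bake sshd into an image of your own and run it detached. The following is a sketch only, with mysshd as a hypothetical image name, assuming a Docker version that supports host:container port mapping:

sudo docker run -i -t ubuntu /bin/bash     # inside: apt-get install openssh-server, mkdir /var/run/sshd, set a root password, exit
sudo docker ps -a                          # note the ID of the container you just left
sudo docker commit <container-id> mysshd   # snapshot it as a reusable image
sudo docker run -d -p 2222:22 mysshd /usr/sbin/sshd -D
ssh root@localhost -p 2222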

As for the Bash example, text-wise it looks like this:

sudo docker run -i -t ubuntu /bin/bash
Unable to find image 'ubuntu' (tag: latest) locally
Pulling repository ubuntu
8dbd9e392a96: Download complete
b750fe79269d: Download complete
27cf78414709: Download complete
root@72c1ab0aa944:/# uname -r
3.11.0-15-generic

Inside the container, you can also execute programs from the shell, for example top. As you can see from the screenshot of the contained environment, the entire userspace process tree consists of just two processes, which is very compact and neat. A tiny memory footprint, yet a fully functional and isolated system.

Top command in container
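
If you prefer text to screenshots, a quick listing inside the container tells the same story; the output below is trimmed and illustrative only:

root@72c1ab0aa944:/# ps aux
USER  PID ... COMMAND
root    1 ... /bin/bash
root   14 ... ps aux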

What next?

A lot of hard work. Trivial examples are trivial. But if you consult the official examples, you will familiarize yourself with a Catch-22. While Docker does take away some of the complexity of cgroups and LXC, it helps little in making the actual system deployment easier. For instance, take a look at the PostgreSQL sample. You will have to create users, configure the network and so forth. Not a plug-and-play scenario. So you should think hard about whether you have the skills, expertise and patience to embark on learning a wrapper technology that is slightly easier than the original thing, but surely not in a revolutionary manner.
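
To give you a flavor of the manual legwork, here is roughly the kind of sequence such database examples walk you through; the steps are paraphrased and hypothetical, not a copy of the official sample:

sudo docker run -i -t ubuntu /bin/bash
# inside the container, something along these lines:
#   apt-get update && apt-get install -y postgresql
#   edit pg_hba.conf and postgresql.conf to allow remote connections
#   sudo -u postgres createuser --superuser docker
#   sudo -u postgres createdb -O docker docker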

Conclusion

Docker seems to be a nice framework, but it will take a few years of maturing until it can become a robust replacement for virtualization, or an easy substitute for the raw technology underneath. I like the clean, fresh approach, and in my experiments, the tool behaved well and robustly. But it was a short, fairly superficial test, so it is hard to say what gives when you push everything to the max.

Anyhow, I hope the developers invest even more time in customization and automation, allowing users to import existing configurations and setups from real hardware, or enabling some kind of seamless provisioning slash migration, which would be the ideal scenario. Finally, you are most likely to dabble in this kind of thing if you need it for work, but it does not hurt to be familiar with the name, the concept and the capabilities. Who knows, you might need them one day.

Cheers.
