Running containers vs running virtual machines – pros and cons simplified

If you can run multiple Docker containers on a single server and you can run multiple virtual machines on a single server, then how does Docker differ from virtual machines? 

Virtual machines each require their own operating system to be installed, whereas Docker can run multiple containers on one operating system. Immediately you are saving overhead by not duplicating operating system files. You also avoid having to update multiple operating systems when security patches or general updates are released.

Virtual machines also require dedicated hardware resources to be allocated to each machine (this is a basic example; there are more modern ways of creating virtual machines that can share resources, but they still carry more overhead than containers). Docker containers, by default, use only as much hardware as they need at a given point in time. Rather than assigning each megabyte of disk space to a specific virtual machine, containers simply take what they are allowed as they need it. The same principle applies to networking, CPU and other resources.
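As a quick illustration (a minimal sketch; the nginx image and container name are just examples), you can start a container without reserving any resources and then watch what it actually consumes:

```bash
# Start a container without reserving any fixed amount of CPU, memory or disk.
docker run -d --name demo-web nginx:1.25

# Show a one-off snapshot of the CPU, memory, network and disk I/O the
# container is actually using right now.
docker stats --no-stream demo-web
```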

How do containers achieve this ability?

Multiple containers can run under one operating system kernel, whereas each virtual machine requires its own kernel. Each container shares the host operating system kernel, and containers often share common binaries and libraries of the host as well. This makes containers very lightweight, and the contents of each container are usually specific to the application you are deploying.
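A simple way to see the shared kernel in action (assuming Docker is running on a Linux host and the alpine image is available) is to compare the kernel version on the host with the one reported inside a container:

```bash
# Kernel version on the host.
uname -r

# Kernel version inside a container: it matches the host, because the
# container shares the host kernel rather than booting its own.
docker run --rm alpine uname -r
```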

Virtual machines run under what is known as a hypervisor. A bare-metal hypervisor runs directly on the physical machine's hardware and offers good security, as there is no relationship between the virtual machines running under it. It also allows for low-latency communication with the hardware of the device. The trade-off is that you need to manage this hypervisor through another machine.

There are also hosted hypervisors that run on top of an operating system. This level of abstraction away from the underlying hardware allows administrators to configure the virtual machines directly on the server and makes it easy to work with the virtual machines from the host operating system. The obvious disadvantage is that the extra layer of abstraction between the virtual machine and the hardware introduces some latency.

How does being small help?

Virtual machines can take quite a bit longer to start running, as they need to boot an operating system and have a larger amount of work to do before they are up and running. Containers, on the other hand, are typically smaller and can take just seconds to start. When working with swarms or clusters, this speed difference is critical. Take the example of a virtual machine running a web server to host your website. You make some changes to your code base and want to deploy a new instance. You can deploy your code via some pipeline and update the running instance with no issues.
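If you want a rough feel for container start-up times (assuming the alpine image has already been pulled locally), timing a throwaway container is a simple test:

```bash
# Start a container, run a single command and remove it again; with the
# image already cached locally this usually finishes in well under a second.
time docker run --rm alpine echo "up and running"
```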

This isn't a big task, but what happens when the operating system needs to be updated? You need to create a new image with the new operating system, patch and update it, then install your application. Compare that to a container. Depending on how your Dockerfile is configured, once you've created an image for your updated application you might not need to do any more than rebuild the image and deploy (if you are building from the latest image of an OS). Considering this redeployment may take only seconds to perform, containers clearly outperform virtual machines in this scenario.
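As a rough sketch of that rebuild-and-redeploy flow (the image name, registry path and ports below are placeholders, not a prescribed setup):

```bash
# Rebuild the application image, forcing a fresh pull of the base image so
# any OS-level patches in the base are picked up.
docker build --pull -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

# On the target host, replace the running container with the new image.
docker pull registry.example.com/myapp:latest
docker stop myapp && docker rm myapp
docker run -d --name myapp -p 80:8080 registry.example.com/myapp:latest
```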

Virtual Machines have their benefits

Security remains a big argument for choosing virtual machines over containers. If proper consideration isn't given to securing your containers, you can end up in a situation where a nefarious actor compromises the shared host running them. The use of public images in building applications means that developers need to be prudent in their choice of base images. If an image contains vulnerabilities or malicious code, the whole container ecosystem can be compromised.

In a virtual machine architecture, each application deployed to a single virtual machine is protected by the hardware virtualisation that the virtual machine provides. This means that if the application becomes compromised, only that virtual machine is affected by the exploit. Virtual machines can be hardened with respect to security, firewalls and so on, and then stored as a well-known deployable image for future installations. This ability to image a secure operating system can make it easier for IT administrators to deploy with confidence.

How to secure containers

The best starting point for securing containers is to first secure the host operating system. Once that is done, give careful consideration to any base images you use to build your application. For example, Docker Hub maintains a list of images that are certified for just this reason. When using these images you are making a reasonable decision regarding both security and performance.

Look for official repositories, as there may be similar-looking repositories containing malicious code that masquerade as an official image. If you need to use a non-certified base image, you may be running the risk of introducing exploits into your architecture. As a developer, you need to be sure of the source of any software that you deploy alongside your own code, so this shouldn't be new to any developer.
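One way to narrow your choices to official images (using nginx purely as an example) is to filter your search on Docker Hub and pin an explicit tag rather than relying on latest:

```bash
# List only images that Docker Hub marks as official for this search term.
docker search --filter is-official=true nginx

# Pull an official image pinned to an explicit tag.
docker pull nginx:1.25-alpine
```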

One tip that can really help you avoid pulling down malicious images is to enable Docker Content Trust. With Docker Content Trust enabled you won't be able to pull unsigned images. This means you can verify both the integrity and the publisher of images received from a registry over any channel.
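Enabling it is just an environment variable (shown here for a single shell session; the alpine pull is only an example):

```bash
# Turn on Docker Content Trust for this shell session; pulls of unsigned
# images will now be refused.
export DOCKER_CONTENT_TRUST=1

# This pull only succeeds if the image tag has a valid signature.
docker pull alpine:3.19
```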

In addition to trusting the images you use, you can limit what each container is allowed to do. Setting resource limits on your containers is a great start; this way you can be sure a container won't get carried away and chew up all your system resources. Options include setting the amount of memory a container can use and the number of CPUs it can consume. By defining these limits per container you set clear boundaries so that the host operating system is never overwhelmed.
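A minimal sketch of what those limits look like on the command line (the image and container names are placeholders):

```bash
# Cap this container at 512 MB of memory and 1.5 CPUs so it cannot starve
# the host or its neighbours.
docker run -d --name limited-app --memory=512m --cpus=1.5 myapp:latest
```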

In addition to the above, there are some scripts in the official Docker GitHub organisation that can help. For example, Docker Bench for Security is a script you can run against your Docker host. It generates a report on different security aspects of your system. According to the About section of the repository, "The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production." The script is inspired by the CIS Docker Benchmark 1.2.0. It's an extremely valuable tool to have, possibly something that could be put in your CI/CD pipeline as a test before deployment.
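Running it follows the usage described in the project's README: clone the repository and execute the script with root privileges on the Docker host.

```bash
# Fetch and run Docker Bench for Security against the local Docker host.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```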

For more articles on Docker, check out these previous posts on bernieslearnings.com
