Simply put, Docker is a set of tools for delivering software in containers. Containers differ from virtual machines in that they all share a single host kernel, rather than each running its own guest operating system. This removes much of the overhead of running and maintaining virtual machines, allowing more containers to run on the same hardware.
What are containers?
A container is a bundle of software and configuration files that together run your application. All the components your application requires (binaries, libraries and so on) are placed inside the container so that your application can run. Each container is therefore isolated from the host operating system and from other containers, so it should behave the same regardless of its deployment target. This leads to consistent deployments without needing to provision a host operating system each time you make a new deployment.
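As a sketch of what this bundling looks like in practice, the commands below build a minimal image for a hypothetical Python application; the file names, base image and tag are illustrative, not from a real project.

```shell
# Write a minimal Dockerfile. The base image, the application files
# and the start command are all declared here, so everything the app
# needs travels inside the image. (Hypothetical app and tag names.)
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# Build the image, then run it as an isolated container.
docker build -t myapp:1.0 .
docker run --rm myapp:1.0
```

The same image can then be run unchanged on any host with Docker installed, which is where the deployment consistency comes from.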
By using containers you can be assured that all the dependencies your software requires are of the correct version and won’t be tampered with by outside applications. For example, if your web application uses a specific version of NGINX, you bundle that version into the container and never have to worry about another user upgrading NGINX and crashing your application. It also means that no one can remove an application that yours depends on.
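To illustrate version pinning (the tag here is just an example), you can fix the NGINX version with an image tag; upgrades on the host, or in any other container, have no effect on it:

```shell
# Run NGINX pinned to a specific version tag. The version inside
# this container stays fixed until you change the tag yourself.
docker run -d --name web -p 8080:80 nginx:1.25

# Confirm the version running inside the container.
docker exec web nginx -v
```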
Keeping clean systems
Docker helps keep a system clean because applications are no longer installed all over the host. Everything you need is inside the container; if you need to clean up, you just stop and remove the container, all the applications it required are gone, and your system is left tidy. Because containers are isolated, you can also run multiple versions of the same software on the same host, which makes developing new software much easier.
You can also just spin up a fresh PostgreSQL database or Ubuntu environment from the command line to test your application against, and when you’re done, just stop and remove the container. Compared with the time it would take to install PostgreSQL or Ubuntu directly, pulling an image and running it is much quicker and leaves your system cleaner. It allows you to spend your precious time developing your application rather than being caught up in system maintenance.
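A throwaway database looks something like this (the container name and password are placeholders):

```shell
# Start a disposable PostgreSQL instance on the standard port.
docker run -d --name test-db \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:16

# ...run your application tests against localhost:5432...

# When finished, stop and remove the container; nothing was
# installed on the host, so there is nothing to clean up.
docker stop test-db
docker rm test-db
```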
Docker runs on Linux, macOS and Windows. But the beauty is that the same image can be used to deploy a container onto each of these with no changes. The container doesn’t rely on the operating system, just the software and configuration files bundled into the container itself.
There are some cases where Docker can’t be deployed the same on each system, but this comes down to software that requires specific functionality only offered by that OS.
What Docker isn’t
Docker is not a virtualization technology. It makes use of the operating system’s own isolation abilities (on Linux, kernel features such as namespaces and cgroups) to run its containers; it doesn’t provide the container technology itself.
Docker is open source
Yes, it is completely open source. You can find all the repositories that make up the platform on the https://github.com/docker organization page. I would suggest that anyone learning Docker check out these repositories. In particular, there is a getting-started repository that is a really nice introduction to using Docker, but there are also the documentation repository (which is the source for docs.docker.com), the engine and compose. Be sure to check out the details of the Moby Project while you’re at it.
To make things a little more complicated, there is also a company named Docker, Inc. The company sharing its name with the open-source platform can be confusing; personally, I always refer to the company as Docker Inc. and to the open-source platform as Docker.
The company offers different service plans ranging from free for individual developers to paid monthly plans for teams. The service includes hosting repositories at Docker Hub, CI/CD options, user management, community tools and support.
By default, images are pulled from Docker Hub. You can change the default registry to a registry you control if you would prefer to use your own registry.
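In practice, switching registries is a matter of fully qualifying the image name (the registry hostname below is a placeholder):

```shell
# An unqualified name defaults to Docker Hub:
# "nginx:1.25" resolves to docker.io/library/nginx:1.25.
docker pull nginx:1.25

# Fully qualify the name to pull from your own registry instead.
docker pull registry.example.com/myteam/myapp:1.0
```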
As each container is self-contained, with its own filesystem and network setup, if an application is compromised then (provided your containers are run with the correct privileges) the attacker’s access is limited to the inside of the container.
The host is protected by default from any intrusion into the container. There are caveats to this, and you should read the documentation before deploying to a production system to ensure you’re not inadvertently opening up your containers to any attack vectors.
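As a sketch of the kind of privilege trimming the documentation describes (the image name is a placeholder), you can run a container as a non-root user, with a read-only root filesystem and all Linux capabilities dropped:

```shell
# Run as a non-root user, with a read-only root filesystem and
# all Linux capabilities dropped, to shrink the attack surface
# if the application inside is ever compromised.
docker run -d --name hardened \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  myapp:1.0
```

Many applications need some writable paths or specific capabilities, so flags like these usually get tuned per workload rather than applied blindly.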
When working with third-party images, you should take care in selecting them. Docker Hub helps make this choice easier with official images and certified content.
The official images are a good starting point for developers using Docker. They are published by a dedicated team sponsored by Docker, Inc. By using an image in the official image list you can be assured that best practices are being applied in terms of security and Dockerfile creation.
The image list is carefully curated to include images that serve as a base for common development tasks, from selected OS repositories to programming languages, database solutions and popular services. You should be able to cover the majority of application development tasks with this set of images.
The image list is openly discussed on GitHub. It is here that new images can be proposed to be put on the official list, but it is also where users can contribute to the process with feedback, code improvements or even suggest changes to the process of selecting official images. This open discussion helps keep the list up to date, secure and in line with the development community.
Each image is scanned for vulnerabilities. The results of these scans are uploaded to Docker Hub for any logged-in user to view. You can delve further into the results to see which layers contain components that have vulnerabilities. You can then see a detailed report of the vulnerability by clicking the vulnerable component.
Certified images are denoted with the “Docker Certified” badge in Docker Hub. The badge indicates high-quality content that is compatible, tested and supported on Docker Enterprise by a verified publisher. By using these images you can trust the technology and know that there is a relationship between Docker and the verified publisher, which is great for enterprise architecture.
There will be more learning Docker articles coming. In the meantime, check out my post on one of the most common Docker issues for new Docker users – Easy way to connect docker to localhost