Over the past few years, the tech world has embraced container technology, especially Docker. If you’re new to containers, it might take a little effort to understand what they do and why they’re so useful. But given their increasing prevalence, that knowledge could prove beneficial to your career.

At the most basic level, you can think of containers as lightweight alternatives to virtual machines. The idea of software containers is nothing new; as far back as the late 1970s, Unix’s chroot provided similar isolation. But many tech pros are only now coming around to how useful they are.

As you explore Docker (and potential alternatives), you might start to wonder whether you can simply use the container services offered by public clouds, specifically Amazon’s EC2 Container Service and the Google Container Engine. The short answer is that those cloud-based services don’t compete directly with Docker and its ilk. Rather, they are built around Docker: you can happily use Docker on your local development machines as well as on your cloud-based production machines (with the help of the cloud provider’s container services).

Before we plunge into detail, let’s look at what people are doing with Docker and how it fits into a typical tech stack. Here are the key benefits of containers:
Isolating Trouble
Suppose you’re running multiple containers on a single server, each running different software, and one of them dies. Rebooting the entire server would temporarily take down the other containers, but restarting just the failed container only restarts the software running inside it, which is a huge benefit. Badly behaving software has a tendency to take the entire machine down with it; containers isolate that software so only the container dies, not the whole server. This way, you can have multiple software packages running within their own containers on a single machine, and if one dies, it won’t take everything else along for the ride.
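To make that concrete, here’s a minimal sketch of the workflow, assuming two hypothetical containers named web-app and job-queue (the names and images are just illustrations):

```bash
# Run two services in separate containers (names and images are
# illustrative placeholders, not a recommended setup).
docker run -d --name web-app --restart unless-stopped nginx
docker run -d --name job-queue --restart unless-stopped redis

# If job-queue misbehaves, restart it alone; web-app keeps serving.
docker restart job-queue

# Optionally, check why it died before restarting.
docker logs --tail 50 job-queue
```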
Packaging Images
You can easily deploy images created in Docker to systems with little or no setup. You can customize individual containers (much as you would a running virtual machine), which gives you a lot of flexibility in what you pre-install into the image. For example, you can start with a basic Debian image and use it as the basis for launching multiple containers with different types of software installed. Or you can create an image that contains a base operating system plus additional software; every container you start from that image will have the software pre-installed. You can then further customize each container launched from the image.
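Here’s a hedged sketch of that idea: a Dockerfile that starts from a Debian base and pre-installs a package, so every container launched from the resulting image has it ready. The image tag and package choice are assumptions for illustration:

```bash
# Build a reusable image on top of a Debian base.
cat > Dockerfile <<'EOF'
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
CMD ["bash"]
EOF

docker build -t my-debian-base:1.0 .

# Every container launched from the image has curl pre-installed.
docker run --rm my-debian-base:1.0 curl --version
```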
An Image for Every Need
Because images are so easy to create and install, tech pros have created Docker images that range from basic operating systems (such as Debian) to images that bundle an operating system with pre-installed software (such as PostgreSQL or MongoDB). In the latter case, when you launch a container from the postgres image (for example), the PostgreSQL server starts right up. A common scenario is to put your individual database servers inside separate containers: MongoDB in one container, PostgreSQL in another. Indeed, there are pre-packaged images already built for such purposes. You can find these images in container repositories such as the Docker Hub, which hosts over a hundred official Docker images, as well as on GitHub and other repository servers. (Thousands of “unofficial” Docker images aren’t supported by Docker, but often work just fine.)
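For example, pulling and running the official database images looks roughly like this (the container names and password are placeholders):

```bash
# Fetch the official images from Docker Hub.
docker pull postgres
docker pull mongo

# Launch each database server in its own container; the official
# postgres image requires a superuser password via an env variable.
docker run -d --name my-postgres -e POSTGRES_PASSWORD=example postgres
docker run -d --name my-mongo mongo

# Both servers are now running, each isolated in its own container.
docker ps
```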
Multiple Versions of Software on a Single Server
One cool thing you can do with Docker is install multiple versions of a software package on a single computer and run them side by side, usually without conflict. While this might not always be necessary, there are times when an app is written to interact with a specific version of a database and requires features that are later deprecated or even removed. With Docker, you can easily launch containers based on images that contain different versions of the software, and using Docker’s built-in networking, each app can communicate with the database container it needs. Say you have different versions of MySQL installed on your home and office systems. Instead of installing the office version on your home PC and potentially wiping out the home version, you can run the needed version in a container and let it operate independently of your existing installation. For developers who work from home, this is a great option.
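A sketch of what that looks like in practice, assuming the office system runs MySQL 5.7 and home runs 8.0 (the version tags, host ports, and password are illustrative):

```bash
# Run two MySQL versions side by side; each listens on 3306 inside
# its container, mapped to a different port on the host.
docker run -d --name mysql57 -e MYSQL_ROOT_PASSWORD=example \
    -p 3307:3306 mysql:5.7
docker run -d --name mysql80 -e MYSQL_ROOT_PASSWORD=example \
    -p 3308:3306 mysql:8.0

# Point each app (or a local mysql client) at the version it needs.
mysql -h 127.0.0.1 -P 3307 -u root -p   # the 5.7 server
mysql -h 127.0.0.1 -P 3308 -u root -p   # the 8.0 server
```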
It’s Not as Easy as It Sounds
Although what I’m describing here is pretty straightforward, it takes time to learn and fully understand Docker containers. Fortunately, there are tools that make your container life easier, including docker-compose, which lets you manage multiple containers simultaneously. You need to spend time studying the intricacies of containers before you let loose and deploy. Here are some key things to learn:
- Starting and stopping containers
- Deleting containers (and the volumes attached to containers)
- Sharing volumes between containers
- Mapping volumes to your host machine
- Building code in a container
- Setting up private networks for your containers
- Building your own images
- Deciding whether software should go into the image or be installed separately in the container
- Mastering the Dockerfile (a file that describes how to build an image) and docker-compose.yml files (which configure docker-compose); a sketch follows this list
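To tie several of those items together, here’s a minimal docker-compose.yml sketch: two services on a private network, with a named volume mapped into the database container. The service names, image choice, and password are assumptions for illustration:

```bash
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app:
    build: .              # built from the Dockerfile in this directory
    depends_on: [db]
    networks: [backend]
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # data survives the container
    networks: [backend]
networks:
  backend:                # a private network shared by the two services
volumes:
  db-data:                # a named volume managed by Docker
EOF

# Start, inspect, and tear down the whole stack with single commands.
docker-compose up -d
docker-compose ps
docker-compose down
```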
Now let’s tackle the question presented at the beginning of this article: What about cloud-based containers?
Now for Cloud Containers
As I mentioned before, Amazon and Google (among others) provide their own cloud container systems that do not actually compete with Docker; rather, the cloud providers offer additional tools for managing Docker containers across multiple virtual servers. This means you can create Docker containers for your development system and also use Docker containers for your production system.

Cloud containers trigger a whole new list of things to learn. If you’re planning on letting your containers run side by side on a single EC2 instance, for example, you’ll want to know how to configure your system appropriately. Cloud providers also include virtual private networking and the ability to provision multiple servers; you’ll need to understand how that works, and how Docker-management systems run atop it.

That’s a lot to learn, but well worth it when you consider how many businesses rely on a combination of on-premises and cloud services to get tech-related things done. Just don’t think that these cloud services are somehow “competing” with Docker.
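To make the dev-to-cloud handoff concrete, here’s a hedged sketch of the usual flow: the image you build locally is the exact artifact the cloud runs. The registry URL and account ID are placeholders, and Amazon ECR is shown only as one example; other providers have equivalent registries:

```bash
# Build the image locally, exactly as you would for development.
docker build -t my-app:1.0 .

# Log in to the cloud registry, then tag and push the same image
# (the account ID and region below are placeholders).
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0

# The provider's container service (ECS, GKE, and so on) then schedules
# containers from that image onto your virtual servers.
```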