The engineering behind web applications has evolved tremendously since Docker’s debut. Thanks to Docker, it’s now easier to construct scalable and manageable applications built from microservices. To help you understand what microservices are and how Docker helps implement them, let’s examine a hypothetical situation.
Imagine you have John, Jane, and Jason on your web development team. John develops on Mac OSX, Jane works on Windows, and Jason has decided he works best on Debian. The three of them use different operating systems to develop the very same application, and each OS requires its own unique setup. To give all of them what they need, you might have to install some 50 libraries plus their dependencies. Still, it’s almost inevitable that incompatible libraries and language versions will conflict with each other across these three different developer environments. Add in three more environments – staging, testing, and production servers – and you start to get an idea of how difficult it is to ensure uniformity across all of them.
What do microservices and Docker have to do with the situation above?
The problem we’ve just described is relevant even when you’re building monolithic applications. And it gets much worse if you decide to follow the modern trend and develop a microservices-based application. Since microservices are self-contained, independent application units that each fulfil only one specific business function, they can be considered small applications in their own right. What will happen if you create a dozen microservices for your app? And what if they use different technology stacks? Your team will have to take care of far more environments than it would with a classic monolithic application, and that’s a tough road for any company.
But there is a solution: using containers to encapsulate each microservice. Docker helps you manage those containers, as it allows you to build, ship, and run distributed applications. These are called containerized apps because each one is packaged in an image that can be run as a container virtually any time, from anywhere. We’ll review the advantages of using Docker for implementing microservices and see how they can help us.
Docker Benefits for Microservices in 2020
Docker, when compared to virtual machines, has always had the potential to change the way apps are built. For years, engineers have relied on virtual machines (VMs), which package software into virtual images so that each application runs in its own dedicated environment.
Such isolation guarantees that filesystem modifications made by one app won’t interfere with another. Containers preserve this isolation while avoiding some of the problems typically associated with virtualization, such as performance overhead – though they don’t eliminate that overhead entirely.
This figure illustrates how a hypervisor can be used to run more than one operating system on a server. The technology effectively reduces the hardware required to run multiple systems simultaneously while enhancing overall efficiency.
Deploying each microservice to its own virtual machine instance is inefficient: since microservices are like small apps, it makes far more sense to deploy several of them to the same server. Virtual machines, however, carry overhead that Docker containers don’t. Containers use fewer resources and occupy far less room on the host machine than separate virtual machines, enabling you to run more of them on the same server.
Now that we have a good grip on how to manage a single environment, let’s talk about managing multiple environments. This is more complex than the single-environment case because you need to keep the different versions of your app separated from one another, which can lead to a proliferation of branches and collisions between libraries.
How Docker Bests Virtual Machines
Containers are like virtual machines, but lighter: they don’t boot a full operating system, so they are much faster to start up. Docker not only decreases the downtime between, say, training different models on TensorFlow, but also keeps each model’s environment isolated from the next – which means you’ll dramatically reduce the odds of accidentally introducing corruption into one model’s environment that affects the others.
Thanks to Docker, there’s no need for each developer in a team to carefully follow 20 pages of operating-system-specific instructions. Instead, one developer can create a stable environment with all the necessary libraries and languages and simply save this setup in the Docker Hub (we’ll talk more about the Hub later). Other developers then only need to load the setup to have the same environment. As you can imagine, Docker can save us a lot of time!
There are multiple benefits to using Docker in development teams. Some of the most popular are speed of development, freedom to choose a technology stack per microservice, and consistency in image creation per microservice. With all these features available, it’s clearer than ever that Docker benefits everyone on the team – coders and non-coders alike – who just wants to get things done!
Good Things About Docker Containers:
- Instant Start-Up - Most containers can start within seconds, as opposed to traditional virtual machines that take minutes to boot up.
- Portability - Developers don’t need to worry about moving entire apps; they can pull and run images on multiple servers with ease.
- Faster Deployments - No need to set up new environments; simply pull an image for deployment on different servers. Web development teams especially benefit from the speed of container development because developers can test their code or applications without affecting production systems or websites.
- Efficiency - Docker lets you run many containers on a single server, whereas each virtual machine carries the overhead of a full guest OS, so far fewer of them fit on the same hardware.
- It offers support for various operating systems so you can get Docker for Windows, Mac, Debian, and other OSs.
Let’s see how it works under the hood.
To better understand how Docker works, let's consider a very simple microservices-based application. There are many examples of microservice architectures on the web, but we've created our own for this article.
The application (microservice) shown in the figure consists of three services. The Nginx web server is in charge of routing inbound HTTP requests to the right service to implement a blog for your website. The MySQL database holds details about each post, including its title and date published. Finally, the WordPress blogging engine contains the functionalities required to create new posts and display them when requested by an end-user.
The example above doesn’t cover the entire Docker architecture, as containers are only one part of it. The Docker architecture includes three chief components – images, containers, and registries. We'll review each component one by one. But before you can actually use Docker for development, you'll need to install it on your computer.
To make Docker containers work together, we must first register each service in the app's docker-compose.yml file.
This file is very useful because it coordinates all the services. Every container should be registered in it to let Docker know what image to pull down from the registry and how that container will link up with the other containers running on your system. Docker Compose registers our app's services – “nginx,” “wordpress,” and “mysql” – by name within this YAML configuration file. We can pass additional options when defining each service if necessary so that they are registered correctly during deployment – just remember to keep the indentation between directives correct! Also, note the “volumes” key under each of our services. It tells Docker to mount the persistent volumes specified there into that service's directory on startup.
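As a rough sketch of what such a file might look like – the image tags, environment values, and volume paths here are illustrative assumptions, not the project's actual configuration:

```yaml
version: "3"

services:
  nginx:
    # Built from a local Dockerfile instead of a ready-made image
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - wordpress

  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_PASSWORD: example
    volumes:
      - wp_data:/var/www/html

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql

volumes:
  wp_data:
  db_data:
```

With a file like this in place, running `docker-compose up -d` starts all three services and wires them together on a shared network.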
Containers, the building blocks of a Docker deployment, are instantiated from images. These blueprints enable consistency across development and production environments and represent a particular layer in a project’s continuous delivery workflow.
Dockerfiles are just text files that describe how an image should be built. Remember how in docker-compose we didn’t specify an image for Nginx? We didn’t want to simply use a ready-made Nginx image, so we wrote “build” instead. By doing this, we told Docker to build the Nginx image and apply our own configurations to it.
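A minimal Dockerfile for such a custom Nginx image might look like the following sketch – the configuration file name and path are assumptions for illustration:

```dockerfile
# Start from the official Nginx base image
FROM nginx:latest

# Replace the default server configuration with our own
COPY default.conf /etc/nginx/conf.d/default.conf
```

When Docker Compose sees `build` for the nginx service, it runs the equivalent of `docker build` against this file and uses the resulting image.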
Here’s an important detail: Docker images do not change once pushed to a registry. When we need to make modifications, we pull an image down, change it, and push another version out. Each time you run a container with your changes, Docker starts from the base image specified in docker-compose.yml or the Dockerfile. And because Docker isolates the container's environment from the host operating system, we can use a specific version of a library or programming language inside a container without it ever conflicting with the version installed on our computer.
In the docker-compose file, each service has an attribute called “volumes,” which is used to store data persistently. Containers that reference the same volume name can access the same data; with an appropriate volume driver, even containers running on different servers can use volumes to share data. By doing this, we ensure that the data is accessible to all containers and survives container restarts, without having to back it up separately for each container!
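For instance, a named volume shared between two services could be declared like this – the service names and mount paths are illustrative assumptions:

```yaml
services:
  wordpress:
    image: wordpress:latest
    volumes:
      - shared_uploads:/var/www/html/wp-content/uploads

  backup:
    image: alpine:latest
    # This service sees the same files WordPress writes
    volumes:
      - shared_uploads:/data/uploads

volumes:
  shared_uploads:
```

Both containers read and write the same `shared_uploads` volume, so the data outlives either container.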
Image Registries (Repositories)
So far, we’ve only discussed images and containers. These are the basic components of Docker’s architecture. But we haven’t yet learned where Docker keeps these images. All of the Nginx, WordPress, and MySQL containers in our example app are built from standard images that are stored at Docker Hub, where you can find the Nginx, WordPress, and MySQL base images.
Let's say you want more than just these three basic container types for your web application stack. How would you go about adding other necessary container types quickly? We already mentioned registries – another important component of the Docker ecosystem. Docker Hub is an example of a registry: it's basically a repository where all images (the software packages) are stored.
In this section, we've talked about the basic components of Docker: containers, images, and image registries. We'd now like to cover some additional elements that are essential parts of the Docker architecture – namespaces, control groups, and UnionFS – and explain how they keep containers from accessing each other’s state, manage hardware resources among containers, and provide the building blocks for container filesystems.
It's important to note, however, that while these components might sound new, they're not actually anything novel created by Docker. In fact, they predate Docker in the Linux world and are effective even without it. What we've introduced here can be boiled down to the following takeaways:
- Dockerfiles are the instructions by which you will modify, create, and manage your images using Docker.
- As each application is now a separate microservice, the Dockerfile is where you specify the particulars of each one!
- Each microservice within your project now needs its own Dockerfile.
- Docker containers are always created from specified images, which is what guarantees consistency across environments. If something goes wrong with one of your containers, it can be traced back to the culprit image – and from there to the image's specific source code repository, where you can understand what went wrong.
- It's absolutely crucial to register all of your services in docker-compose.yml so Docker knows how they should run!
Extending the Architecture of a Microservices-Based App with Docker
We now need to include one more service in the Docker Compose configuration file – Varnish – and specify the ports through which we can connect to our app. If you decide not to use Varnish, simply remove it from your docker-compose file and update your configuration accordingly. You can find all of the updates necessary for this WordPress container project on GitHub. As you saw before, that's pretty much everything you need to know to manage Docker containers.
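Adding Varnish to the Compose file might look like this sketch – the image tag and port mapping are assumptions for illustration, not the project's actual settings:

```yaml
services:
  varnish:
    image: varnish:latest
    # Varnish now accepts traffic on port 80 and forwards
    # cache misses to the nginx service behind it
    ports:
      - "80:80"
    depends_on:
      - nginx
```

With Varnish in front, Nginx no longer needs to publish port 80 itself; requests reach it through the cache.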
Managing Docker-based Apps with Container Orchestration Systems
Docker lets us deploy microservices one by one on a single host (server). A small app (like our example app) with fewer than a dozen services doesn’t need any complex management. But it’s best to be prepared for when your app grows. If you run several servers, how can you deploy several containers across all of them? How can you scale those servers up and down? Docker’s ecosystem comes with Container Orchestration Systems for handling these problems.
Docker Swarm is fantastic for managing large Docker deployments made up of many different services. It can take some effort to implement, but Docker now makes it easier than ever: manager nodes cluster containers into groups, allow you to scale them independently, and keep tasks running in the background.
The Swarm manager node facilitates load balancing so that tasks across your entire cluster receive an even share of traffic, helping you build distributed systems made up of thousands of ephemeral containers without having to set up load balancers yourself.
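In Compose file version 3, Swarm scaling can be expressed with a `deploy` section – the replica count below is an arbitrary example, not a recommendation:

```yaml
services:
  wordpress:
    image: wordpress:latest
    deploy:
      # Swarm keeps five identical wordpress tasks running and
      # load-balances incoming requests across them
      replicas: 5
      restart_policy:
        condition: on-failure
```

Deployed with `docker stack deploy`, Swarm schedules these replicas across the nodes in the cluster and restarts any task that fails.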
Besides Docker Swarm, there are several other container orchestration managers you could also consider:
- Kubernetes and DC/OS. Each of these is a different type of program that helps you manage and deploy your production applications.
- As we head into the future of containers, we believe it’s best to choose a combination of the available options, letting each handle what it does best, so you can optimize your production systems as much as possible.
If you’re interested in cloud-powered solutions that can make your life easier when it comes to running Dockerized applications and, more importantly, orchestrating containers, there are some key players to look out for:
- Google Cloud Platform with support for Kubernetes. There’s also a managed service called Google Kubernetes Engine (formerly Google Container Engine) that is based on Kubernetes.
- Amazon Web Services works on the same principle as Google’s services, but on AWS you can run Docker containers using EC2 (Elastic Compute Cloud) instances. An extra bit of advice: if you want to work with Kubernetes on AWS, use its Elastic Kubernetes Service (EKS).
- Azure Container Service is a hosting solution that allows you to deploy and manage Docker containers. It can scale to support workloads at any level and also provides monitoring APIs for tracking application health.
All three platforms let you use containers in your project – you can simply deploy containers with your applications, and each supports working with containers via Kubernetes.
Using microservices and containers is considered the proper modern way to build scalable and manageable web applications. If you don’t containerize your microservices, you’ll face a lot of difficulties when deploying and managing them. This is why we use Docker: it lets us avoid those troubles when deploying microservices. Add in a Container Orchestration System, and you will be able to handle your Dockerized applications without limits.