Docker

Docker is an open-source engine that automates the deployment of applications into containers. It was written by the team at Docker, Inc. (formerly dotCloud, Inc., an early player in the Platform-as-a-Service (PaaS) market) and released under the Apache 2.0 license.

Docker adds an application deployment engine on top of a virtualized container execution environment. It is designed to provide a lightweight and fast environment in which to run your code, as well as an efficient workflow to get that code from your laptop to your test environment and then into production. Docker is incredibly simple: you can get started with it on a minimal host running nothing but a compatible Linux kernel and a Docker binary.


Docker's mission is to provide:

An easy and lightweight way to model reality:

Docker is fast. You can Dockerize your application in minutes. It relies on a copy-on-write model, so making changes to your application is also incredibly fast: only what you want to change gets changed. You can then create containers running your applications. Most Docker containers take less than a second to launch. Removing the overhead of the hypervisor also means containers are highly performant, and you can pack more of them into your hosts to make the best possible use of your resources.
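
To see that launch speed for yourself, you can time a container that starts, runs a no-op, and exits; a tiny sketch (the ubuntu image is an illustrative choice, assumed to be pulled already):

    # Time the full create/start/exit cycle of a throwaway container.
    time docker run --rm ubuntu /bin/true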

A logical segregation of duties:

With Docker, developers care about their applications running inside containers, and operations cares about managing the containers. Docker is designed to enhance consistency by ensuring the environment in which your developers write code matches the environments into which your applications are deployed. This reduces the risk of “worked in dev, now an ops problem.”

Fast, efficient development life cycle:

It aims to reduce the cycle time between code being written and code being tested, deployed, and used. It aims to make your applications portable, easy to build, and easy to collaborate on.

Encourages service-oriented architecture:

Docker also encourages service-oriented and microservices architectures. Docker recommends that each container run a single application or process. This promotes a distributed application model in which an application or service is represented by a series of interconnected containers. This makes it very easy to distribute, scale, debug, and introspect your applications.
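
As a sketch of what that looks like in practice (image and network names here are illustrative), you might run a cache and a web server as two separate, connected containers:

    # Create a user-defined network so containers can find each other by name.
    docker network create appnet

    # One process per container: a Redis cache and an Nginx web server.
    docker run -d --name cache --network appnet redis
    docker run -d --name web --network appnet -p 8080:80 nginx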

Docker components

The core components:

 • The Docker client and server

 • Docker Images

 • Registries

 • Docker Containers

Docker client and server

Docker is a client-server application. The Docker client talks to the Docker server, or daemon, which, in turn, does all the work. Docker ships with a command-line client binary, docker, as well as a full RESTful API. You can run the Docker daemon and client on the same host or connect your local Docker client to a remote daemon running on another host.
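
A quick way to see both halves of this architecture is to ask each for its version; a minimal sketch (the remote hostname is hypothetical, and assumes that daemon has been configured to listen on TCP):

    # Show the versions of both the client and the daemon it is talking to.
    docker version

    # Point the client at a remote daemon instead of the local one.
    export DOCKER_HOST=tcp://docker.example.com:2375
    docker info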

Docker images

Images are the building blocks of the Docker world. You launch your containers from images. Images are the “build” part of Docker’s life cycle. They are a layered format, using union filesystems, built step by step from a series of instructions.

For example, an instruction might:

 • Add a file.

 • Run a command.

 • Open a port.

You can consider images to be the “source code” for your containers. They are highly portable and can be shared, stored, and updated. In this book, we’ll learn how to use existing images as well as build our own images.
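
To make those build instructions concrete, here is a tiny image built entirely from the shell; a minimal sketch (the image tag and file contents are illustrative):

    # Write a four-instruction Dockerfile; each instruction adds a layer.
    printf '%s\n' \
      'FROM ubuntu:22.04' \
      'RUN apt-get update && apt-get install -y nginx' \
      'COPY index.html /usr/share/nginx/html/' \
      'EXPOSE 80' > Dockerfile

    # Provide the file the Dockerfile copies in, then build and tag the image.
    echo 'Hello from a container' > index.html
    docker build -t example/static-web .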

Registries

Docker stores the images you build in registries. There are two types of registries: public and private. Docker, Inc., operates the public registry for images, called the Docker Hub. You can create an account on the Docker Hub and use it to share and store your own images. The Docker Hub also contains, at last count, over 10,000 images that other people have built and shared. 

Want a Docker image for an Nginx web server, the Asterisk open source PABX system, or a MySQL database? All of these are available, along with a whole lot more. You can also store images that you want to keep private on the Docker Hub. These images might include source code or other proprietary information you want to keep secure or only share with other members of your team or organization.
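
Pulling a public image from the Docker Hub and pushing one of your own both go through the docker client; a minimal sketch (the account name is hypothetical, and the tag reuses the illustrative image built earlier):

    # Download a public image from the Docker Hub.
    docker pull nginx

    # Log in, retag the image we built earlier under your account, and share it.
    docker login
    docker tag example/static-web yourname/static-web
    docker push yourname/static-web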

Containers

Docker helps you build and deploy containers, inside of which you can package your applications and services. As we’ve just learnt, containers are launched from images and can contain one or more running processes. You can think about images as the building or packing aspect of Docker and the containers as the running or execution aspect of Docker.

A container is:

 • An image format.

 • A set of standard operations.

 • An execution environment.

Docker borrows the concept of the standard shipping container, used to transport goods globally, as a model for its containers. But instead of shipping goods, Docker containers ship software. Each container contains a software image, its ‘cargo’, and, like its physical counterpart, allows a set of operations to be performed. For example, it can be created, started, stopped, restarted, and destroyed. Like a shipping container, Docker doesn’t care about the contents of the container when performing these actions; for example, whether a container is a web server, a database, or an application server. Each container is loaded the same as any other container.
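
That set of standard operations maps directly onto docker subcommands; a quick sketch of the lifecycle (the container name and image are illustrative):

    # The standard lifecycle operations, applied to a single container.
    docker create --name web -p 8080:80 nginx   # created
    docker start web                            # started
    docker stop web                             # stopped
    docker restart web                          # restarted
    docker rm -f web                            # destroyed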

Why use Docker?

One of the wonderful things about open source is the freedom it offers in selecting the right tools for your tasks. Imagine you’re a lone developer aiming for a simple, tidy environment to test your creations. That’s where Docker steps in, providing a lightweight solution without the hassle of intricate orchestration. If it is readily available on your system and your peers are well-versed in its toolkit, Docker Community Edition (docker-ce) presents an excellent starting point for diving into the world of containers.

When it comes to finding images for your container engine, platforms like Docker Hub and Quay.io come to the rescue, offering a plethora of options. However, if Docker Community Edition isn’t an option or isn’t supported, consider turning to Podman; it’s a wise alternative.

In the ever-evolving landscape of technology, advocating for open standards remains an ongoing effort. Therefore, it’s crucial to align with projects that uphold and nurture open source principles and standards for the long haul. While the allure of proprietary extras might be tempting initially, it often comes at the cost of sacrificing the flexibility to switch tools down the line. Remember, containers should be liberating, not confining. Stick with solutions that keep your options open.

What can you use Docker for?

Why should you even bother with Docker or containers in general? Well, let’s break it down in simpler terms.

Containers offer a neat little trick: isolation. Think of them as these fantastic sandboxes perfect for testing out all sorts of stuff. But that’s not all. Because they’re sort of like the LEGO bricks of the tech world—standardized and all—they’re great for building services, too.

Here’s the real-world scoop on what it can do:

  • Speeding up your local development and building process, making it smoother, faster, and lighter. Developers love it because they can whip up, run, and share Docker containers like it’s nobody’s business. Plus, you can seamlessly move these containers from development to testing to production.
  • Keeping your services and applications running consistently across different environments. This is gold for setups heavy on micro-services or those big on service-oriented architectures.
  • Being the go-to tool for running tests. Picture this: you’ve got your Continuous Integration (CI) suite like Jenkins CI, and Docker’s there to create these neat little isolated playgrounds for your tests.
  • Being the ultimate prep tool before launching your applications into the wild world of production. You can build and test all those complex apps and setups right on your local machine.
  • Playing a crucial role in setting up multi-user Platform-as-a-Service (PaaS) setups.
  • Providing lightweight sandbox environments for all sorts of tech tinkering (see the sketch just after this list). Whether you’re learning the ins and outs of the Unix shell or diving deep into a new programming language, Docker’s got your back.
  • Being the backbone for Software as a Service (SaaS) applications. Ever heard of Memcached as a service? Yeah, Docker’s behind that, too.
  • And let’s not forget its role in those massive, lightning-fast deployments of hosts. Think of it as the secret sauce behind those awe-inspiring hyperscale setups.
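
That sandbox point is easy to demonstrate: a throwaway container gives you a full shell that vanishes without a trace when you exit. A minimal sketch (the ubuntu image is an illustrative choice):

    # Start an interactive Ubuntu shell; --rm deletes the container on exit.
    docker run --rm -it ubuntu /bin/bash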

Installing Docker

Getting Docker up and running is a breeze. It’s supported on a wide range of Linux platforms, making it super accessible. You’ll find it bundled with popular distributions like Ubuntu and Red Hat Enterprise Linux (RHEL), along with their derivatives such as Debian, CentOS, Fedora, and Oracle Linux.

But hey, if you’re on OS X or Microsoft Windows, don’t fret. You can still join the Docker party using a virtual environment. The Docker team even suggests deploying it on Ubuntu or RHEL hosts, and they provide handy packages to make the process a walk in the park.

In this chapter, I’ll walk you through setting up Docker in four different environments, each serving as a perfect companion to the others:

  • Installing on an Ubuntu-hosted system (sketched just below).
  • Setting it up on Red Hat Enterprise Linux or any of its derivatives.
  • Getting up and running on OS X with the help of Boot2Docker.
  • And last but not least, setting sail with it on Microsoft Windows, again with the trusty Boot2Docker.
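
To give a flavour of the Ubuntu route, here is one common path at the time of writing, using Docker’s own convenience script; a hedged sketch, so check the current documentation before running anything:

    # Fetch and run Docker's convenience install script on Ubuntu.
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    # Verify the installation end to end.
    sudo docker run hello-world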

What is a Docker image?

Let’s continue our journey with Docker by learning a bit more about Docker images. A Docker image is made up of filesystems layered over each other. At the base is a boot filesystem, bootfs, which resembles the typical Linux/Unix boot filesystem. A Docker user will probably never interact with the boot filesystem. Indeed, when a container has booted, it is moved into memory, and the boot filesystem is unmounted to free up the RAM used by the initrd disk image.

 So far this looks pretty much like a typical Linux virtualization stack. Indeed, Docker next layers a root filesystem, rootfs, on top of the boot filesystem. This rootfs can be one or more operating systems (e.g., a Debian or Ubuntu filesystem). In a more traditional Linux boot, the root filesystem is mounted read-only and then switched to read-write after boot and an integrity check is conducted. In the Docker world, however, the root filesystem stays in read-only mode, and Docker takes advantage of a union mount to add more read-only filesystems onto the root filesystem. A union mount is a mount that allows several filesystems to be mounted at one time but appear to be one filesystem. 

The union mount overlays the filesystems on top of one another so that the resulting filesystem may contain files and subdirectories from any or all of the underlying filesystems. Docker calls each of these filesystems images. Images can be layered on top of one another. The image below is called the parent image, and you can traverse each layer until you reach the bottom of the image stack, where the final image is called the base image. Finally, when a container is launched from an image, Docker mounts a read-write filesystem on top of any layers below. This is where whatever processes we want our Docker container to run will execute.
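
You can see this layering on any image you have locally; a quick sketch using standard docker commands (the image and container names are illustrative):

    # List the layered build steps that make up an image.
    docker history nginx

    # Launch a container, then inspect its top read-write layer:
    # docker diff lists files the container has added or changed.
    docker run -d --name web nginx
    docker diff web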

Using Docker for continuous integration

Let’s look at using Docker’s capabilities in a multi-developer continuous integration testing scenario. Docker excels at quickly generating and disposing of one or multiple containers. There’s an obvious synergy between Docker’s capabilities and the concept of continuous integration testing.

Often in a testing scenario you need to install software or deploy multiple hosts frequently, run your tests, and then clean up the hosts to be ready to run again. In a continuous integration environment, you might need these installation steps and hosts multiple times a day. This adds a considerable build and configuration overhead to your testing lifecycle. Package and installation steps can also be time-consuming and annoying, especially if requirements change frequently or steps require complex or time-consuming processes to clean up or revert.

Docker makes the deployment and cleanup of these steps and hosts cheap. To demonstrate this, we’re going to build a testing pipeline in stages using Jenkins CI. First, we’re going to build a Jenkins server that also runs Docker. To make it even more interesting, we’re going to be very recursive and run Docker INSIDE Docker. Turtles all the way down!
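
One common way to approximate that Docker-inside-Docker setup is to run Jenkins as a container and hand it the host’s Docker socket, so jobs started by Jenkins create sibling containers on the host; a hedged sketch, not the only way to wire this up:

    # Run Jenkins in a container, sharing the host's Docker socket so
    # builds inside Jenkins can launch containers of their own.
    docker run -d --name jenkins \
      -p 8080:8080 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      jenkins/jenkins:lts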
