Docker in Action Book Summary

Omar Ayman
6 min read · Apr 15, 2020

Author Note

This is my first attempt to summarize a book. I chose “Docker in Action” by Jeff Nickoloff because I found it insightful and interesting, especially for those who study software engineering or are interested in it. People usually use Docker to containerize their applications. I myself have used Docker to deploy applications. To be frank, I wasn’t familiar with it before; however, thanks to the brilliant blogs and stories from various inspiring people on the internet who seek to share knowledge, I started using it. In the beginning, I didn’t feel that I had fully wrapped my mind around how it works. Nevertheless, with the help of this book, I found myself diving deep into Docker’s different aspects and getting exposed to different use cases. This is not a summary of the whole book; this story covers chapters 1–4.

I. Chapter 1.

The chapter mainly revolves around learning how to use containers to isolate an application and translate its needs into running containers, along with basic Docker operations and commands, running multiple programs in containers, and cleaning up.

The chapter begins with a scenario: someone asks you for a website that is closely monitored, with a solution that emails them whenever the server goes down, using the NGINX server and orchestrating the whole thing with Docker.

There are three programs: NGINX (the program to be watched), the watcher (the program that watches it), and the mailer (the program that sends the emails). These programs will run as containers in detached mode, i.e., in the background. So, let us start with the first program.

1. Run the NGINX server in detached mode:

docker run --detach \
    --name web nginx:latest

Output:

7cb5d2b9a7eab87f07182b5bf58936c9947890995b1b94f412912fa822a9ecb

2. Now run the mailer container with the following command:

docker run -d \
    --name mailer \
    dockerinaction/ch2_mailer

3. Now we should create our watcher to monitor the website, but before that, I want to show you how to run it interactively.

docker run --interactive --tty \
    --link web:web \
    --name web_test \
    busybox:1.29 /bin/sh

Now we have done a couple of things:

a. Ran the watcher interactively (attaching the container’s terminal to your host computer’s terminal).

b. Linked it to the web container so that it can interact with it later.

Now, let us make an HTTP request to the web app using wget to make sure it is running:

wget -O - http://web:80/

You should get a response from NGINX, confirming that it is running. To finish off, let us link the three apps together and run the watcher (agent) again:

docker run -it \
    --name agent \
    --link web:insideweb \
    --link mailer:insidemailer \
    dockerinaction/ch2_agent

You can watch the logs of any container using:

docker logs web
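If you want to keep streaming new log lines as they appear (a standard Docker flag, not shown in the book excerpt above), add the --follow (-f) flag:

docker logs -f web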

Sometimes, when you have many containers, names can conflict, so you can rename a container like this:

docker rename webid webid-old

A. The Legacy Of Container Network Linking

You may notice that the Docker documentation describes network links as a legacy feature. Network links were an early and popular way to connect containers. Links create a unidirectional network connection from one container to other containers on the same host. Significant portions of the container ecosystem asked for fully peered, bidirectional connections between containers, and Docker provides this with user-defined networks. These networks can also extend across a cluster of hosts. Network links and user-defined networks are not equivalent, but Docker recommends migrating to user-defined networks.

It is uncertain whether the container network linking feature will ever be removed. Numerous useful tools and unidirectional communication patterns depend on linking, as illustrated by the containers used to inspect and watch the web and mailer components in this section.
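As a rough illustration of the user-defined network alternative (this is not part of the book’s example; the network and container names here are made up), you could create a network and attach containers to it, after which they can reach each other by name in both directions:

docker network create example-net

docker run -d --network example-net --name web2 nginx:latest

docker run -it --network example-net busybox:1.29 \
    wget -O - http://web2:80/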

Now that we know how to link containers together, let us link a WordPress container to a MySQL database:

docker run -d --name wp3 \
    --link wpdb:mysql \
    -p 8000:80 \
    --read-only \
    -v /run/apache2/ \
    --tmpfs /tmp \
    wordpress:5.0.0-php7.2-apache
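Note that this command assumes a MySQL container named wpdb is already running. If it is not, one could be started along these lines (the image tag and root password here are just illustrative values):

docker run -d --name wpdb \
    -e MYSQL_ROOT_PASSWORD=ch2demo \
    mysql:5.7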

The database needs some configuration, such as the database name and password, so you can always inject these values as environment variables:

docker create --link wpdb:mysql \
    -e WORDPRESS_DB_NAME=client_a_wp \
    wordpress:5.0.0-php7.2-apache
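Unlike docker run, docker create only prepares the container and prints its ID without starting it. A minimal way to start it afterwards (the CLIENT_CID variable name is just for illustration) looks like this:

CLIENT_CID=$(docker create --link wpdb:mysql \
    -e WORDPRESS_DB_NAME=client_a_wp \
    wordpress:5.0.0-php7.2-apache)

docker start $CLIENT_CID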

B. Read-Only Filesystems

To get started working on your client’s system, create and start a container from the WordPress image by using the --read-only flag:

docker run -d --name wp --read-only \
    wordpress:5.0.0-php7.2-apache

This ensures that the running container will not be able to change any files inside its own filesystem.
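As a quick sanity check (not part of the book excerpt), you can see the effect of --read-only by trying to write a file inside a read-only container; the write should fail with a read-only filesystem error:

docker run --rm --read-only alpine:latest touch /testfile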

The author wraps up this chapter with a remarkable shell script that runs a WordPress container per client, giving each client its own instance of the app and setting the database name based on the client ID, as follows:

#!/bin/sh

# Fail fast if no client ID was provided
if [ ! -n "$CLIENT_ID" ]; then
    echo "Client ID not set"
    exit 1
fi

# Create and start a WordPress container for this client
WP_CID=$(docker create \
    --link $DB_CID:mysql \
    --name wp_$CLIENT_ID \
    -p 80 \
    --read-only -v /run/apache2/ --tmpfs /tmp \
    -e WORDPRESS_DB_NAME=$CLIENT_ID \
    wordpress:5.0.0-php7.2-apache)

docker start $WP_CID

# Create and start a watcher (agent) for this client
AGENT_CID=$(docker create \
    --name agent_$CLIENT_ID \
    --link $WP_CID:insideweb \
    --link $MAILER_CID:insidemailer \
    dockerinaction/ch2_agent)

docker start $AGENT_CID

In addition, the script creates an agent (watcher) per client ID and links it to that client’s web app and to the mailer.

If you saved this file as “start-wp-multiple-clients.sh”, you could run it for a given client as follows:

CLIENT_ID=dockerinaction ./start-wp-multiple-clients.sh
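The script assumes that the shared database and mailer containers already exist and that their container IDs are available in the DB_CID and MAILER_CID variables. They could be prepared roughly like this (the MySQL image tag and password here are only illustrative):

DB_CID=$(docker create -e MYSQL_ROOT_PASSWORD=ch2demo mysql:5.7)

docker start $DB_CID

MAILER_CID=$(docker create dockerinaction/ch2_mailer)

docker start $MAILER_CID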


II. Chapter 2. Working With Storage And Volumes

This part focuses on how to share data between the host and a container, as well as between containers.

A. In-Memory Storage

Web applications often rely on sensitive files such as private keys, database passwords, and API key files, which should not be written to an image or to persistent storage. For cases like these, you can add in-memory storage to containers with a special type of mount.

What you need to do is set the type option of the --mount flag to tmpfs; this is the easiest way to mount a memory-based filesystem into a container’s file tree. Then run the following command:

docker run --rm \
    --mount type=tmpfs,dst=/tmp \
    --entrypoint mount \
    alpine:latest -v

B. Docker Volumes

Docker volumes are named filesystem trees managed by Docker. They can be implemented with disk storage on the host filesystem, and they make it straightforward to create volumes and share data between the host and containers.

Get started by creating the volume that will store the Cassandra database files. This volume uses disk space on the local machine, in a part of the filesystem managed by the Docker engine:

docker volume create \
    --driver local \
    --label example=cassandra \
    cass-shared

The volume you just created is not yet attached to any container; you can think of it as standalone local storage.
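You can confirm that the volume exists and see where Docker keeps it on the host with docker volume inspect (this step is not in the book excerpt, and the reported path varies by platform):

docker volume inspect cass-shared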

Nevertheless, we can make use of this volume by running a container and attaching the volume to it, as shown here:

docker run -d \
    --volume cass-shared:/var/lib/cassandra/data \
    --name cass1 \
    cassandra:2.2
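Because the data lives in the volume rather than in the container, it survives the container itself. As a rough sketch (the cass2 name follows the example’s pattern but is otherwise illustrative), you could remove cass1 and attach the same volume to a fresh container:

docker rm -f cass1

docker run -d \
    --volume cass-shared:/var/lib/cassandra/data \
    --name cass2 \
    cassandra:2.2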

Credits

This content has been edited & revised by our wonderful content writer Nabilah Mousa
