docker

Avoiding Dependency Reinstalls When Building a NodeJS Docker Image

When I first started writing a Dockerfile for building a NodeJS app, I would do:

WORKDIR /app
COPY . /app
RUN npm install

Sure, this is very simple, but every time a source file changes, the entire dependency tree is re-installed. The dependencies only actually need to be re-installed when package.json changes.

One trick is to move package.json elsewhere, install the dependencies there, then move them back:

WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN mv /tmp/node_modules /app/node_modules

However, if there are many dependencies with lots of files, the last step will take a long time.

How about using a symbolic link to shorten the time?

WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN ln -s /tmp/node_modules /app/node_modules

Well, it works, but not every NPM package is happy about it. Some will complain about the symlink, resulting in errors.

So, how can we reuse cached layers as much as possible, and:

  1. Not have to reinstall dependencies
  2. Not use symbolic links

Here is a solution: just move the COPY statement down:

# Set the working directory, which creates the directory.
WORKDIR /app
# Install dependencies.
ADD package.json /tmp
RUN cd /tmp && npm install
# Move the dependency directory back to the app.
RUN mv /tmp/node_modules /app
# Copy the content into the app directory. The previously moved "node_modules"
# directory will not be overwritten.
COPY . /app

With this configuration, as long as package.json stays the same, only the last layer is rebuilt when the source code changes.
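One caveat worth noting (an assumption about your local setup, not part of the recipe above): if a node_modules directory exists in the build context, COPY . /app will copy it over the freshly installed one. A .dockerignore file keeps it out of the context:

# .dockerignore: exclude local dependencies and npm debug logs from the build context
node_modules
npm-debug.log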

Dockerizing Redmine

Migrate an existing Redmine application into Docker containers. There are two main steps:

  1. Migrate the MySQL database
  2. Migrate the Redmine application

MySQL

Create an archive from the source data directory containing the data to be migrated:

$ sudo tar -C /var/lib/mysql -czf ~/mysql.tar.gz .

The directory /var/lib/mysql is the default location where all database-related data are stored.

Download the archive:

$ scp example.com:~/mysql.tar.gz .

Extract it to the directory on the host that we will mount into the container (in the command below, $_ expands to the last argument of the previous command, /srv/mysql):

$ sudo mkdir -p /srv/mysql && sudo tar -C $_ -xzf mysql.tar.gz

Download the MySQL Docker image (5.7.16 was the latest at the time of writing). Pin the version to match the source data, because a newer branch such as 8.x might not work with the existing data directory:

$ docker pull mysql && docker pull mysql:5.7.16

Run a MySQL container with the data directory mounted from the host:

$ docker run \
    --detach \
    --name mysql \
    --restart always \
    --volume /srv/mysql:/var/lib/mysql \
    mysql

The environment variables (MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD) are unnecessary here:

Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.[^1]
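Had this been a fresh data directory, at least the root password would have been required; a minimal sketch (the password value is a placeholder):

$ docker run --detach --name mysql \
    --env MYSQL_ROOT_PASSWORD=secret \
    mysql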

Now I should be able to access my existing database:

$ docker exec -it mysql sh -c 'mysql -uredmine -p redmine'
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.16 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

But to access the MySQL database from another host, we have to update the permissions. Let’s log into MySQL as the root user:

$ docker exec -it mysql sh -c 'mysql -uroot -p mysql'

Issue SQL commands to check the current permissions:

mysql> SELECT Host, User FROM db WHERE Db='redmine';
+-----------+---------+
| Host      | User    |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)

mysql> SELECT Host, User FROM user WHERE User='redmine';
+-----------+---------+
| Host      | User    |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)

Only localhost is allowed for both the database and the user. This is problematic: how does one know the hostname of a container that has yet to be created? The easiest way is to allow access from all hosts:

mysql> UPDATE db SET Host='%' WHERE Db='redmine';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> UPDATE user SET Host='%' WHERE User='redmine';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

We could instead restrict access to a set of IP addresses such as 172.17.0.% (Docker networking uses the 172.17.0.x range by default) or to subdomains such as %.example.com.
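For example, a sketch mirroring the updates above, restricting the redmine user to the Docker bridge network:

mysql> UPDATE db SET Host='172.17.0.%' WHERE Db='redmine';
mysql> UPDATE user SET Host='172.17.0.%' WHERE User='redmine';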

A better way would be, when linking containers, to update the mysql container’s /etc/hosts file with an entry for the redmine container; then a wildcard would not be necessary.

But allowing all hosts to connect is fine here, because "all hosts" does not really mean all hosts: the MySQL port is not bound or mapped to any port on the host machine. Only the host itself (172.17.0.1) and other Docker containers on the same host can discover the service; machines and containers outside the host cannot access it at all.
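To verify, docker port prints nothing for this container, since no ports were published:

$ docker port mysql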

Restart the container to have the update take effect:

$ docker restart mysql

Redmine

For a basic Redmine setup, there isn’t much to migrate: mainly just three directories, config, files, and log; everything else is application data. Configuration can be done via environment variables, and if the logs aren’t important to migrate, we only need to move the files directory, which holds file uploads.
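To sketch that final step (the environment variable names and the files path are from the official redmine image documentation; the host path and password are placeholders, not from the original setup):

$ docker run \
    --detach \
    --name redmine \
    --link mysql:mysql \
    --env REDMINE_DB_MYSQL=mysql \
    --env REDMINE_DB_USERNAME=redmine \
    --env REDMINE_DB_PASSWORD=secret \
    --volume /srv/redmine/files:/usr/src/redmine/files \
    --publish 3000:3000 \
    redmine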

Renewing Let's Encrypt SSL Certificate with Docker

Let’s Encrypt CA issues short-lived certificates (90 days). An automated renewal process is preferred, recommended, and encouraged. But in a few situations, an automated process is not available; here is how to renew manually when the SSL certificate was installed with Docker.

First, update the image to the latest version. The latest version can be found on the releases page on GitHub.

The latest is v0.9.1:

$ docker pull quay.io/letsencrypt/letsencrypt:v0.9.1

Turn off the application (if it runs as a Docker container) to free up the HTTPS port 443:

$ docker stop app

Renew the certificate by issuing the renew command:

$ docker run -it --rm -p 443:443 --name certbot \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/log/letsencrypt:/var/log/letsencrypt \
    quay.io/letsencrypt/letsencrypt:v0.9.1 renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log
-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert is due for renewal, auto-renewing...
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0001_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0001_csr-certbot.pem
-------------------------------------------------------------------------------
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/example.com/fullchain.pem
-------------------------------------------------------------------------------
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/example.com/fullchain.pem (success)

Start the app again (note that it’s start, not restart):

$ docker start app

Check the expiration date:

$ echo | openssl s_client -connect example.com:443 2> /dev/null | \
    openssl x509 -noout -dates
notBefore=Oct 9 12:00:00 2016 GMT
notAfter=Jan 7 12:00:00 2017 GMT

For more information on renewing, see the Renewing Certificates section from the Certbot documentation.
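If the brief downtime is acceptable, the manual steps above can be chained and scheduled; a sketch of a weekly cron entry (the schedule and container name are assumptions):

0 3 * * 1 docker stop app && docker run --rm -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt -v /var/log/letsencrypt:/var/log/letsencrypt quay.io/letsencrypt/letsencrypt:v0.9.1 renew && docker start app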

Settings:

  • Certbot v0.9.1
  • Docker v1.12.1

Getting Let's Encrypt SSL Certificate with Docker

Let’s Encrypt is a free, open, and automated certificate authority (CA). Its Certbot is a fully-featured, extensible client for the Let’s Encrypt CA that can automate the tasks of getting, renewing, and even installing SSL certificates.

First, you need to get Certbot. There are a few ways to install it, but with Docker, you don’t need to install anything: just download the Docker image and run the container. The caveat is that this method does not automatically install the certificate into your web server. But if you’re like me, running the server in another Docker container, this might be the way to go.

Let’s start.

First, download the image. You can download the latest version (tag):

$ docker pull quay.io/letsencrypt/letsencrypt:latest

But latest is usually not a stable release:

$ docker run -it --rm quay.io/letsencrypt/letsencrypt:latest --version
certbot 0.10.0.dev0

Therefore, it’s better to use a specific release, which can be found in Certbot’s GitHub page: https://github.com/certbot/certbot/releases.

The latest one now is v0.9.1. We can pull that from Quay.io:

$ docker pull quay.io/letsencrypt/letsencrypt:v0.9.1

Confirm the release version:

$ docker run -it --rm quay.io/letsencrypt/letsencrypt:v0.9.1 --version
certbot 0.9.1

Let’s take a look at the Docker image:

$ docker inspect quay.io/letsencrypt/letsencrypt:v0.9.1

Dropping the parts we don’t care about, the output is:

[
    {
        "RepoTags": [
            "quay.io/letsencrypt/letsencrypt:v0.9.1"
        ],
        "ContainerConfig": {
            "ExposedPorts": {
                "443/tcp": {}
            },
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ENTRYPOINT \u0026{[\"certbot\"]}"
            ],
            "Volumes": {
                "/etc/letsencrypt": {},
                "/var/lib/letsencrypt": {}
            },
            "Entrypoint": [
                "certbot"
            ]
        }
    }
]

The ENTRYPOINT is the certbot binary:

An ENTRYPOINT allows you to configure a container that will run as an executable.[^1]

The command-line arguments to docker run become the arguments to the certbot command, as we saw earlier when obtaining the release version with --version.

You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.[^1]

For example, to override and run the container without executing the certbot command:

$ docker run -it --rm --name certbot --entrypoint /bin/bash \
    quay.io/letsencrypt/letsencrypt:v0.9.1

But we are more concerned with the rest, such as the exposed port and the volumes. The exposed port is 443, the HTTPS port. The most important volume (directory) is /etc/letsencrypt: all generated keys and issued certificates can be found there. The directory /var/lib/letsencrypt is the default working directory, where some backup material is stored; I have yet to find it useful. However, the logs directory /var/log/letsencrypt is not declared as a volume, and it could be useful if things go haywire.
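With those volumes in mind, here is a sketch of actually obtaining a certificate with the standalone method (the domain and email are placeholders; it assumes port 443 on the host is free):

$ docker run -it --rm -p 443:443 --name certbot \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/lib/letsencrypt:/var/lib/letsencrypt \
    -v /var/log/letsencrypt:/var/log/letsencrypt \
    quay.io/letsencrypt/letsencrypt:v0.9.1 \
    certonly --standalone -d example.com --email admin@example.com --agree-tos

The issued certificate and keys then appear under /etc/letsencrypt/live/example.com/ on the host.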

Will Docker Container Restart Pick Up Updated Image?

When a Docker image has been updated, will restarting the running container via docker restart pick up the change? An educated guess would be no, because, like restarting a process, the memory is retained. The best way to find out is to give it a try.

Let’s start with a Dockerfile:

# Version Foo
FROM debian:8.5
CMD while true; do echo foo; sleep 5; done

The command will keep printing foo every 5 seconds.

Create the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo foo; sleep 5; done
---> Running in 38fdeb15f629
---> 6a56a50ef254
Removing intermediate container 38fdeb15f629
Successfully built 6a56a50ef254

Notice the image ID starting with 6a56.

Start the container:

$ docker run -d --name example example
dac42e7194e4ec2bdca8e24db29a3333ae2f422d316e341c5cb1499034a4357b

Check the log:

$ docker logs example
foo
foo

This is the expected output.

Inspect the container:

$ docker inspect example

The important field is the corresponding image, which matches the previously built image:

{
    ...
    "Image": "sha256:6a56a50ef254bb1d07117b0a0750ef81fafe9735ab3b0f2b0a14511f38d5b83d"
    ...
}

Now update the Dockerfile:

# Version Bar
FROM debian:8.5
CMD while true; do echo bar; sleep 5; done

This time it prints bar instead of foo.

Rebuild the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo bar; sleep 5; done
---> Running in 7fc297e12005
---> a6c04345afb9
Removing intermediate container 7fc297e12005
Successfully built a6c04345afb9

Now we have a different image, with a different image ID: a6c0. But the old image is still there:

$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
example      latest   a6c04345afb9   24 seconds ago   125.1 MB
<none>       <none>   6a56a50ef254   3 minutes ago    125.1 MB

Restart the container:

$ docker restart example
example

Got bar? No, still foo all the way in the log. And when you inspect the container, it still uses the old image.

So, docker restart will not pick up changes from the updated image; the container keeps using the image it was created from. Therefore, the correct way is to drop the container entirely and run it again:

$ docker stop example && docker rm example && docker run -d --name example example
example
example
55cec9110fed0257060673a085a08f143003336b1720894f43c6ac5a22104935

The log shows the correct message:

$ docker logs example
bar
bar

Inspecting the container, now it has the correct image:

$ docker inspect example
{
    ...
    "Image": "sha256:a6c04345afb953ab392241f56c04f72110c772a6ee3a36e248c1ffd03f81b7d6"
    ...
}

And don’t forget to delete the old image.
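For example, using the first image ID from above:

$ docker rmi 6a56a50ef254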

Settings:

$ docker --version
Docker version 1.12.0, build 8eab29e

Creating a Data Volume Container in Dockerfile

Creating a Docker data volume container in a Dockerfile is unbelievably simple: just use the VOLUME instruction:

FROM debian:8.5
VOLUME ["/data"]

The instruction creates a mount point and marks it as holding externally mounted volumes from the native host or other containers.

Build the data container:

$ docker build -t data .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : VOLUME /data
---> Running in 5511f34a489c
---> 7b723b2b3d13
Removing intermediate container 5511f34a489c
Successfully built 7b723b2b3d13

The built image is just about 125.1 MB, essentially the Debian base image.

Create the data volume container:

$ docker create --name data data

The first data is the name of the container; the second is the name of the Docker image.

To attach the data volume container to another container, we use the --volumes-from option:

$ docker run -it --rm --name foo --volumes-from=data debian:8.5 /bin/bash
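To see where the volume actually lives on the host, inspect the data container (a sketch using a Go template to pick out the Mounts field):

$ docker inspect --format '{{ .Mounts }}' data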

If there are initial data to copy, add a COPY instruction:

FROM debian:8.5
VOLUME ["/data"]
COPY . /data

Settings:

$ docker --version
Docker version 1.11.2, build b9f10c9

Getting the Version of the Latest Release

What’s the latest release of Docker?

Its homepage doesn’t tell you. You have to poke around and click a few links, which may or may not get you what you want. A quick way, even better a CLI method, would be great.

There are a couple of things we can do. First, when installing Docker, we use the URL https://get.docker.com/. It has a path that returns installation instructions with the version number:

$ curl https://get.docker.com/builds/
# To install, run the following command as root:
curl -sSL -O https://get.docker.com/builds/Linux/x86_64/docker-1.11.2.tgz && sudo tar zxf docker-1.11.2.tgz -C /
# Then start docker in daemon mode:
sudo /usr/local/bin/docker daemon

There is another way (there is always another way). The Docker project is hosted on GitHub, so we can use this URL:

https://github.com/docker/docker/releases/latest

which will be redirected to the latest release:

https://github.com/docker/docker/releases/tag/v1.11.2

Since it’s a redirect, we can use the HTTP HEAD method without downloading the entire response body:

$ curl --silent --head https://github.com/docker/docker/releases/latest
HTTP/1.1 302 Found
Server: GitHub.com
Content-Type: text/html; charset=utf-8
Status: 302 Found
Cache-Control: no-cache
Vary: X-PJAX
Location: https://github.com/docker/docker/releases/tag/v1.11.2
Vary: Accept-Encoding

Extracting and processing the value of the Location field will get us what we are looking for.

Let’s construct a simple command to obtain such an information:

$ curl \
    --silent \
    --head \
    --url https://github.com/docker/docker/releases/latest | \
  grep \
    --regexp=^Location | \
  cut \
    --delimiter=/ \
    --fields=8

or:

$ curl -sI https://github.com/docker/docker/releases/latest | \
    grep ^Location | \
    cut -d / -f 8

Both commands will return v1.11.2.

By using GitHub, we can get the latest stable release version not only of Docker but of other projects as well. In fact, if a project is hosted on GitHub and its releases are properly tagged, you can use this method to obtain the version. However, if it’s not properly tagged, as with Node.js, you need to find another way.
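As a sketch, the same pipeline can be generalized into a small shell function (the function name and the sed expressions are my own; it extracts the tag from the Location header instead of counting path segments):

latest-release() {
    # Follow the /releases/latest redirect and pull the tag out of the Location header.
    curl --silent --head "https://github.com/$1/releases/latest" | \
        grep -i '^location:' | \
        sed -e 's#.*/tag/##' -e 's/[[:space:]]*$//'
}

$ latest-release docker/docker
v1.11.2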

Resetting GitLab User Password with a Simple Shell Script

Problem

Resetting a password is one of the most common requests in virtually any system. In GitLab, a user password can be updated by visiting the /admin/users page. But if you forgot the password for the root or admin user, you need another method to reset it.

Objective

The goal is to simplify the process of resetting a GitLab user password via the CLI, so that the next time the problem comes up, the fix is quick and easy.

Settings

Self-hosted GitLab, installed in a Docker container.

Solutions

A step-by-step procedure to reset the root password is already provided by the GitLab documentation. By converting it into a Bash shell script and placing it in the user’s home bin directory as an executable file, ~/bin/gitlab-password-reset, we create a simple command that can be run repeatedly:

#!/usr/bin/env bash

echo -n "Email: "; read MAIL
echo -n "Password: "; read -s PASS
echo # Ensure a new line

docker exec gitlab gitlab-rails runner -e production " \
  user = User.find_by(email: '$MAIL'); \
  user.password = user.password_confirmation = '$PASS'; \
  user.save!"

We could simply run the long docker command instead of a shell script. But since we’re dealing with a password, it’s good practice to avoid placing sensitive information in the command-line history.

No trailing spaces are allowed on the password field, by the way.

Another solution is to turn the Ruby evaluation into a script and save it somewhere like the /srv/gitlab/config directory. Then we can just run:

$ docker exec gitlab gitlab-rails runner -e production /etc/gitlab/scripts/password-reset.rb

Because we are using Docker to run GitLab, and the following directories are mapped from the host to the guest:

/srv/gitlab/config:/etc/gitlab
/srv/gitlab/logs:/var/log/gitlab
/srv/gitlab/data:/var/opt/gitlab

Therefore, when executing the Ruby script, the path is /etc/gitlab instead of /srv/gitlab. However, you will need to figure out how to get the email and password into the script. That’s for you to answer.

Use a Single Dockerfile

I was planning to build two different Docker images: one for production and one for test. However, the more I coded, the more frustrated I became. I want something simple, and more means complexity:

  1. Having multiple images means more configuration files to manage and update.
  2. More work when integrating third-party tools; for example, Google App Engine Managed VMs deployment only recognizes the Dockerfile at the application root out of the box.
  3. Steeper learning curve for new developers.

Keeping the application structure simple is important. If you need multiple Dockerfiles, your application is too complex. The difference between npm install --production and npm install isn’t a lot, but a single Dockerfile saves you the time and effort of managing several. There is no reason to have more than one Dockerfile; just use a different command, such as npm test, when running tests, as in the sketch below.
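A sketch of this approach (the base image, npm scripts, and image name are assumptions, not from a specific project): a single Dockerfile installs all dependencies, the default command runs the app, and tests simply override the command:

FROM node:4
WORKDIR /app
# Install all dependencies, including devDependencies needed for tests.
COPY package.json /app
RUN npm install
COPY . /app
CMD ["npm", "start"]

Run tests with the same image:

$ docker run --rm myapp npm test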

Install Docker on Ubuntu Trusty 14.04

Installing Docker on Ubuntu Trusty 14.04 is fairly straightforward:

$ curl -sSL https://get.docker.com/ | sh
$ sudo usermod -aG docker ${USER}
$ exit

Log out, then log back in to verify the installation by running a sample container:

$ docker run hello-world

Done!