When I first started writing a Dockerfile for building a Node.js app, I would do:
WORKDIR /app
COPY . /app
RUN npm install
Sure, this is very simple, but every time a source file changes, the entire dependency tree is reinstalled. The dependencies only really need to be reinstalled when package.json changes.
One trick is to move package.json elsewhere, install the dependencies, then move them back:
WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN mv /tmp/node_modules /app/node_modules
However, if there are many dependencies with lots of files, the last step will take a long time.
How about using a symbolic link to shorten the time?
WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN ln -s /tmp/node_modules /app/node_modules
Well, it works, but not every npm package is happy about it. Some complain about the symlink, resulting in errors.
So, how can we reuse the cache as much as possible, and:
Not have to reinstall dependencies when only the source changes
Not use symbolic links
Here is a solution: just move the COPY statement down:
# Set the working directory, which creates the directory.
WORKDIR /app
# Install dependencies.
ADD package.json /tmp
RUN cd /tmp && npm install
# Move the dependency directory back to the app.
RUN mv /tmp/node_modules /app
# Copy the content into the app directory. The previously added "node_modules"
# directory will not be overwritten.
COPY . /app
With this configuration, if package.json stays the same, only the last layer will be rebuilt when the source code changes.
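To see the layer caching in action, build the image twice; a minimal sketch, where the image name myapp is just a placeholder:
$ docker build -t myapp .
$ touch index.js
$ docker build -t myapp .
On the second build, the npm install steps are reported as "Using cache"; only the final COPY layer is rebuilt.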
Download the latest MySQL Docker image (5.7.16 at the time of writing; make sure it is the latest, because the 8.x branch might not work):
$ docker pull mysql && docker pull mysql:5.7.16
Run a MySQL container with the data directory mounted from the host:
$ docker run \
--detach \
--name mysql \
--restart always \
--volume /srv/mysql:/var/lib/mysql \
mysql
Environment variables (MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, or MYSQL_PASSWORD) become unnecessary, because the mounted data directory already contains an existing database:
Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.[^1]
Now I should be able to access my existing database:
$ docker exec -it mysql sh -c 'mysql -uredmine -p redmine'
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.16 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
But when accessing the MySQL database from another host, we have to update the permissions. Let’s log into MySQL as the root user:
$ docker exec -it mysql sh -c 'mysql -uroot -p mysql'
Issue SQL commands:
mysql> SELECT Host, User FROM db WHERE Db='redmine';
+-----------+---------+
| Host | User |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)
mysql> SELECT Host, User FROM user WHERE User='redmine';
+-----------+---------+
| Host      | User    |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)
Only localhost is allowed for both the database and the user. This is problematic: how does one know the host of a container yet to be created? The easiest way is to allow access from all hosts:
mysql> UPDATE db SET Host='%' WHERE Db='redmine';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> UPDATE user SET Host='%' WHERE User='redmine';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
We could instead restrict access to a set of IP addresses such as 172.17.0.% or subdomains such as %.example.com; Docker networking uses the 172.17.0.x range.
A better way would be for container linking to update the mysql container’s /etc/hosts file with an entry for the redmine container; then a wildcard would not be necessary.
But allowing all hosts to connect is fine here, because "all hosts" does not really mean all hosts: the port is not bound or mapped to any open port on the host machine. Only the host itself (172.17.0.1) and other Docker containers inside the host can discover the service; machines or containers outside the host cannot access it at all.
Restart the container for the update to take effect:
$ docker restart mysql
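Alternatively, because the UPDATE statements above modify the grant tables directly, you can reload the privileges from inside the mysql client instead of restarting the container:
mysql> FLUSH PRIVILEGES;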
Redmine
For a basic Redmine setup, there isn’t much to migrate: mainly just three directories, config, files, and log; the rest is application data. Configuration can be done via environment variables, and if the logs aren’t important enough to migrate, we just need to move the files directory, which is used for file uploads.
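The container could then be started with just that directory mounted; a sketch, assuming the official redmine image and its /usr/src/redmine/files upload path, linked to the mysql container from above:
$ docker run \
--detach \
--name redmine \
--link mysql:mysql \
--volume /srv/redmine/files:/usr/src/redmine/files \
redmine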
The Let’s Encrypt CA issues short-lived certificates (90 days). An automated renewal process is preferred, recommended, and encouraged. But in a few situations an automated process is not available, so here is how to renew manually when the SSL certificate was installed with Docker:
First, update the image to the latest version. The latest version can be found on the releases page on GitHub.
Let’s Encrypt is a free, open, and automated certificate authority (CA), and Certbot is a fully featured, extensible client for the Let’s Encrypt CA that can automate the tasks of getting, renewing, and even installing SSL certificates.
First, you need to get Certbot. There are a few ways to install Certbot, but with Docker you don’t need to install anything: just download the Docker image and run the container. The caveat is that this method does not automatically install the certificate into your web server. But if you’re like me, running the server in another Docker container, this might be the way to go.
Let’s start.
First, download the image. You can download the latest version (tag):
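The pull would look like this (a sketch; you can also pin a specific tag, such as the v0.9.1 tag used in the examples below):
$ docker pull quay.io/letsencrypt/letsencrypt:latest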
An ENTRYPOINT allows you to configure a container that will run as an executable.[^1]
And the command line arguments to docker run become the arguments to the certbot command, as we saw earlier when obtaining the release version with --version.
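That version check looks something like this (a sketch):
$ docker run --rm quay.io/letsencrypt/letsencrypt:v0.9.1 --version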
You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.[^1]
For example, to override and run the container without executing the certbot command:
$ docker run -it --rm --name certbot --entrypoint /bin/bash \
quay.io/letsencrypt/letsencrypt:v0.9.1
But we are more concerned with other details, such as the exposed port and the mapped volumes. The exposed port is 443, the HTTPS port. The most important volume (directory) is /etc/letsencrypt: all generated keys and issued certificates can be found there. The directory /var/lib/letsencrypt is the default working directory, where some backup files are stored; I have yet to find it useful. However, the logs directory /var/log/letsencrypt is not being used; it could be useful if things go haywire.
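Putting it together, a manual run might look like the following; a sketch, assuming the standalone authenticator, placeholder host paths, and the hypothetical domain example.com:
$ docker run -it --rm --name certbot \
-p 443:443 \
-v /srv/letsencrypt/etc:/etc/letsencrypt \
-v /srv/letsencrypt/log:/var/log/letsencrypt \
quay.io/letsencrypt/letsencrypt:v0.9.1 \
certonly --standalone -d example.com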
When a Docker image has been updated, will restarting a running container via docker restart pick up the change? An educated guess would be no: just as a restarted process keeps running the same executable, the container keeps the image it was created from. The best way to find out is to give it a try.
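The setup, reconstructed as a sketch (the debian base image and the foo message are assumptions): build an image named example whose command prints foo in a loop, and run it:
$ printf 'FROM debian\nCMD while true; do echo foo; sleep 5; done\n' > Dockerfile
$ docker build -t example .
$ docker run -d --name example example
Then change foo to bar in the Dockerfile and rebuild; the tail of the build output: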
Step 2 : CMD while true; do echo bar; sleep 5; done
---> Running in 7fc297e12005
---> a6c04345afb9
Removing intermediate container 7fc297e12005
Successfully built a6c04345afb9
Now we have a different image. The image ID is different: a6c0. But the old image is still there:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
example latest a6c04345afb9 24 seconds ago 125.1 MB
<none> <none> 6a56a50ef254 3 minutes ago 125.1 MB
Restart the container:
$ docker restart example
example
Got bar? No, still foo all the way in the log. And when you inspect the container, it still uses the old image.
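One way to confirm, a sketch using docker inspect; it should print the old image ID (6a56…), not the new one:
$ docker inspect --format '{{.Image}}' example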
So, docker restart will not pick up changes from an updated image; the container keeps using the image it was created from. Therefore, the correct way is to drop the container entirely and run it again:
$ docker stop example && docker rm example && docker run -d --name example example
Docker’s homepage doesn’t tell you the latest release version. You have to poke around and click on a few links, which may or may not get you what you want. A quick way, even better a CLI method, would be great.
There are a couple of things we can do. First, when installing Docker, we use the URL https://get.docker.com/. It has a path that returns installation instructions with the version number:
By using GitHub, not only can we get the latest stable release version of Docker, we can also check other projects. In fact, if a project is hosted on GitHub and tagged properly with releases, you can use this method to obtain its version. However, if it’s not properly tagged, such as Node.js, you need to find another way.
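For example, a quick check against the GitHub releases API; a sketch, using the docker/docker repository name from that era (the repository has since moved to moby/moby):
$ curl -s https://api.github.com/repos/docker/docker/releases/latest | grep tag_name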
Resetting a password is one of the most common requests in virtually any system. In GitLab, a user’s password can be updated by visiting the /admin/users page. But if you forgot the password of the root or admin user, you need another method to reset it.
Objective
The goal is to simplify the process of resetting a GitLab user’s password by using the CLI, so that next time the same problem comes up, it will be quick and easy to solve.
Settings
Self-hosted GitLab, installed in a Docker container.
Solutions
A step-by-step procedure to reset the root password is already provided by the GitLab documentation. By converting it into a Bash shell script and placing it in the user’s home bin directory as an executable ~/bin/gitlab-password-reset file, we create a simple command that can be run repeatedly:
#!/usr/bin/env bash
echo -n "Email: "; read MAIL
echo -n "Password: "; read -s PASS
echo # Ensure a new line
# The Ruby body follows the procedure from the GitLab documentation.
docker exec gitlab gitlab-rails runner -e production " \
  user = User.find_by(email: '$MAIL'); \
  user.password = '$PASS'; \
  user.password_confirmation = '$PASS'; \
  user.save!"
We could simply run the long docker command instead of a shell script, but since we’re dealing with a password, it’s good practice to avoid placing sensitive information in the command line history.
No trailing spaces are allowed on the password field, by the way.
Another solution is turning the Ruby evaluation into a script and saving it somewhere like the /srv/gitlab/config directory. Then, we can just run:
$ docker exec gitlab gitlab-rails runner -e production /etc/gitlab/scripts/password-reset.rb
Because we are running GitLab with Docker, the following directories are mapped from the host to the container:
/srv/gitlab/config:/etc/gitlab
/srv/gitlab/logs:/var/log/gitlab
/srv/gitlab/data:/var/opt/gitlab
Therefore, when executing the Ruby script, the path is /etc/gitlab instead of /srv/gitlab. However, you will need to figure out how to get the email and password into the script. That’s for you to answer.
I was planning to build two different Docker images: one for production and one for test. However, the more I coded, the more frustrated I became. I want something simple, and more images means more complexity:
Having multiple images means more configuration files to manage and update.
There is more to do when working with third-party tools; for example, Google App Engine Managed VMs deployment only recognizes the Dockerfile at the application root out of the box.
There is a steeper learning curve for new developers.
Keeping the application structure simple is important. If you need multiple Dockerfiles, your application is too complex. The difference between npm install --production and npm install isn’t a lot, but a single Dockerfile saves you the time and effort of managing several. There is no reason to have more than one Dockerfile; just use a different command, such as npm test, when running tests.
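With a single image, running tests is just a matter of overriding the command at run time; a sketch, where the image name myapp is a placeholder:
$ docker run --rm myapp npm test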