DevOps

Avoiding Dependencies Reinstall When Building NodeJS Docker Image

When I first started writing a Dockerfile to build a Node.js app, I would do:

WORKDIR /app
COPY . /app
RUN npm install

Sure, this is very simple, but every time a source file changes, the entire dependency tree is reinstalled, even though the dependencies only need to be rebuilt when package.json changes.

One trick is to move package.json elsewhere, build the dependencies there, then move them back:

WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN mv /tmp/node_modules /app/node_modules

However, if there are many dependencies with lots of files, the last step (the mv) will take a long time.

How about using a symbolic link to shorten the time?

WORKDIR /app
ADD package.json /tmp
RUN cd /tmp && npm install
COPY . /app
RUN ln -s /tmp/node_modules /app/node_modules

Well, it works, but not every NPM package is happy about it. Some will complain about the softlink, resulting in errors.

So, how can we reuse the cache as much as possible, and:

  1. Not reinstall dependencies when only source files change
  2. Not use symbolic links

Here is a solution: just move the COPY statement down:

# Set the working directory, which creates the directory.
WORKDIR /app
# Install dependencies.
ADD package.json /tmp
RUN cd /tmp && npm install
# Move the dependency directory back to the app.
RUN mv /tmp/node_modules /app
# Copy the content into the app directory. The previously moved "node_modules"
# directory will not be overwritten.
COPY . /app

With this configuration, if package.json stays the same, only the last layer is rebuilt when the source code changes.
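
One caveat: if the build context itself contains a node_modules directory (from a local npm install), COPY . /app would copy it over the freshly installed one. A minimal sketch of a .dockerignore, assuming the Dockerfile sits at the project root, keeps it out of the build context:

# .dockerignore: keep locally installed dependencies out of the build context
node_modules

This also shrinks the build context sent to the Docker daemon.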

Registering a Domain with Amazon Route 53 via CLI

Registering a domain with Amazon Route 53 via the CLI is very straightforward; the only complicated part is generating the acceptable JSON structure, then somehow entering the JSON string on the command line without the shell meddling with it.

The command is:

$ aws route53domains register-domain

The documentation can be found from the AWS CLI Command Reference or issuing:

$ aws route53domains register-domain help

Most of the command line options take simple string values, but some options like --admin-contact require structured data, which is cumbersome to enter on the command line. The easiest way is to skip all the other options and use --cli-input-json to supply everything as JSON.

First, let’s generate the skeleton or template to use:

$ aws route53domains register-domain --generate-cli-skeleton

That should output something like:

{
    "DomainName": "",
    "IdnLangCode": "",
    "DurationInYears": 0,
    "AutoRenew": true,
    "AdminContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "RegistrantContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "TechContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "PrivacyProtectAdminContact": true,
    "PrivacyProtectRegistrantContact": true,
    "PrivacyProtectTechContact": true
}

Fill in the fields and leave out the ones not needed:

{
    "DomainName": "example.com",
    "DurationInYears": 1,
    "AutoRenew": true,
    "AdminContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "joe@example.com"
    },
    "RegistrantContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "joe@example.com"
    },
    "TechContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "joe@example.com"
    },
    "PrivacyProtectAdminContact": true,
    "PrivacyProtectRegistrantContact": true,
    "PrivacyProtectTechContact": true
}

Now let’s register by running the file through jq to strip extra blanks and end-of-line characters, then piping to xargs, using the newline character as the delimiter instead of the default blank:

$ cat input.json | jq -cM . | \
    xargs -d '\n' aws route53domains register-domain --cli-input-json
{
    "OperationId": "00000000-0000-0000-0000-00000000000"
}
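
Domain registration is asynchronous; the returned OperationId can be fed to the get-operation-detail subcommand to check on its progress:

$ aws route53domains get-operation-detail --operation-id 00000000-0000-0000-0000-00000000000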

Dockerizing Redmine

Migrating an existing Redmine application into Docker-based containers involves two main steps:

  1. Migrate MySQL database
  2. Migrate Redmine application

MySQL

Create an archive of the source data directory containing the data to be migrated:

$ sudo tar -C /var/lib/mysql -czf ~/mysql.tar.gz .

The directory /var/lib/mysql is the default location where all database-related data is stored.

Download the archive:

$ scp example.com:~/mysql.tar.gz .

Extract to the directory on the host that we will mount into the container:

$ sudo mkdir -p /srv/mysql && sudo tar -C $_ -xzf mysql.tar.gz

Download the latest MySQL Docker image (5.7.16 at the time of writing; verify what the latest tag points to, because the 8.x branch might not work with the existing data directory):

$ docker pull mysql && docker pull mysql:5.7.16

Run a MySQL container with the data directory mounted from the host:

$ docker run \
    --detach \
    --name mysql \
    --restart always \
    --volume /srv/mysql:/var/lib/mysql \
    mysql

The environment variables (MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD) are unnecessary here:

Do note that none of the variables below will have any effect if you start the container with a data directory that already contains a database: any pre-existing database will always be left untouched on container startup.[^1]

Now I should be able to access my existing database:

$ docker exec -it mysql sh -c 'mysql -uredmine -p redmine'
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.16 MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

But to access the MySQL database from another host, we have to update the permissions. Let’s log in to MySQL as the root user:

$ docker exec -it mysql sh -c 'mysql -uroot -p mysql'

Issue SQL commands:

mysql> SELECT Host, User FROM db WHERE Db='redmine';
+-----------+---------+
| Host      | User    |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)

mysql> SELECT Host, User FROM user WHERE User='redmine';
+-----------+---------+
| Host      | User    |
+-----------+---------+
| localhost | redmine |
+-----------+---------+
1 row in set (0.00 sec)

Only localhost is allowed for both the database and the user. This is problematic: how does one know the host of a container yet to be created? The easiest way is to allow access from all hosts:

mysql> UPDATE db SET Host='%' WHERE Db='redmine';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> UPDATE user SET Host='%' WHERE User='redmine';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

We could instead restrict access to a set of IP addresses such as 172.17.0.% or subdomains such as %.example.com; Docker’s default bridge network uses the 172.17.0.x range.
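
For instance, the more restrictive variant would be (a sketch along the same lines, untested):

mysql> UPDATE db SET Host='172.17.0.%' WHERE Db='redmine';
mysql> UPDATE user SET Host='172.17.0.%' WHERE User='redmine';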

A better way would be, when linking containers, to update the mysql container’s /etc/hosts file with an entry for the redmine container; then a wildcard would not be necessary.

But allowing all hosts to connect is fine here, because “all hosts” does not really mean all hosts: the port is not bound or mapped to any open port on the host machine. Only the host (172.17.0.1) and other Docker containers on the same host can reach the service; machines or containers outside the host cannot access it at all.
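
To verify that nothing is published to the host, docker port should print an empty list for this container:

$ docker port mysql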

Restart the container for the update to take effect:

$ docker restart mysql

Redmine

For a basic Redmine setup, there isn’t much to migrate: mainly just three directories, config, files, and log; the rest is application data. Configuration can be done via environment variables, and if the logs aren’t important to migrate, we only need to move the files directory, which holds uploaded files.
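
With the data in place, running the official redmine image against the migrated database might look like the following sketch (the REDMINE_DB_* variables and the /srv/redmine/files host path are assumptions to adapt to your setup):

$ docker run \
    --detach \
    --name redmine \
    --restart always \
    --link mysql:mysql \
    --publish 3000:3000 \
    --env REDMINE_DB_MYSQL=mysql \
    --env REDMINE_DB_USERNAME=redmine \
    --env REDMINE_DB_PASSWORD=secret \
    --volume /srv/redmine/files:/usr/src/redmine/files \
    redmine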

Renewing Let's Encrypt SSL Certificate with Docker

Let’s Encrypt CA issues short-lived certificates (90 days). An automated renewal process is preferred, recommended, and encouraged. But in a few situations, an automated process is not available; here is how to renew manually when the SSL certificate was installed with Docker.

First, update the image to the latest version. The latest version can be found on the releases page on GitHub.

The latest is v0.9.1:

$ docker pull quay.io/letsencrypt/letsencrypt:v0.9.1

Turn off the application (if running as a Docker container) to free up the HTTPS port 443:

$ docker stop app

Renew the certificate by issuing the renew command:

$ docker run -it --rm -p 443:443 --name certbot \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/log/letsencrypt:/var/log/letsencrypt \
    quay.io/letsencrypt/letsencrypt:v0.9.1 renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log
-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/example.com.conf
-------------------------------------------------------------------------------
Cert is due for renewal, auto-renewing...
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for example.com
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0001_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0001_csr-certbot.pem
-------------------------------------------------------------------------------
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/example.com/fullchain.pem
-------------------------------------------------------------------------------
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/example.com/fullchain.pem (success)

Start the app again (note it’s start, not restart):

$ docker start app

Check the expiration date:

$ echo | openssl s_client -connect example.com:443 2> /dev/null | \
    openssl x509 -noout -dates
notBefore=Oct 9 12:00:00 2016 GMT
notAfter=Jan 7 12:00:00 2017 GMT

For more information on renewing, see the Renewing Certificates section from the Certbot documentation.
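
Where the environment does allow unattended renewal, the same stop/renew/start sequence could be wrapped in a cron entry. A sketch, assuming the container names above (the -it flags are dropped because cron has no TTY):

# Hypothetical crontab entry: renew at 03:00 on the 1st of every month.
0 3 1 * * docker stop app && docker run --rm -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt -v /var/log/letsencrypt:/var/log/letsencrypt quay.io/letsencrypt/letsencrypt:v0.9.1 renew && docker start app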

Settings:

  • Certbot v0.9.1
  • Docker v1.12.1

Getting Let's Encrypt SSL Certificate with Docker

Let’s Encrypt is a free, open, and automated certificate authority (CA). Its Certbot is a fully-featured, extensible client for the Let’s Encrypt CA that can automate the tasks of getting, renewing, and even installing SSL certificates.

First, you need to get Certbot. There are a few ways to install it, but with Docker, you don’t need to install anything: just download the Docker image and run the container. However, the caveat is that this method does not automatically install the certificate into your web server. But if you’re like me, running your server in another Docker container, this might be the way to go.

Let’s start.

First, download the image. You can download the latest version (tag):

$ docker pull quay.io/letsencrypt/letsencrypt:latest

But latest is usually not a stable release:

$ docker run -it --rm quay.io/letsencrypt/letsencrypt:latest --version
certbot 0.10.0.dev0

Therefore, it’s better to use a specific release, which can be found on Certbot’s GitHub releases page: https://github.com/certbot/certbot/releases.

The latest one now is v0.9.1. We can pull that from Quay.io:

$ docker pull quay.io/letsencrypt/letsencrypt:v0.9.1

Confirm the release version:

$ docker run -it --rm quay.io/letsencrypt/letsencrypt:v0.9.1 --version
certbot 0.9.1

Let’s take a look at the Docker image:

$ docker inspect quay.io/letsencrypt/letsencrypt:v0.9.1

Dropping the fields we don’t care about, the output is:

[
    {
        "RepoTags": [
            "quay.io/letsencrypt/letsencrypt:v0.9.1"
        ],
        "ContainerConfig": {
            "ExposedPorts": {
                "443/tcp": {}
            },
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ENTRYPOINT \u0026{[\"certbot\"]}"
            ],
            "Volumes": {
                "/etc/letsencrypt": {},
                "/var/lib/letsencrypt": {}
            },
            "Entrypoint": [
                "certbot"
            ]
        }
    }
]

The ENTRYPOINT is the certbot binary:

An ENTRYPOINT allows you to configure a container that will run as an executable.[^1]

And the command line arguments to docker run become the arguments to the certbot command, as we saw earlier when obtaining the release version with --version.

You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.[^1]

For example, to override and run the container without executing the certbot command:

$ docker run -it --rm --name certbot --entrypoint /bin/bash \
    quay.io/letsencrypt/letsencrypt:v0.9.1

But we are more concerned with the other fields, such as the exposed port and the volumes. The exposed port is 443, the HTTPS port. The most important volume (directory) is /etc/letsencrypt; all generated keys and issued certificates can be found there. The directory /var/lib/letsencrypt is the default working directory, where some backup material is stored; I have yet to find it useful. However, the logs directory /var/log/letsencrypt is not declared as a volume, and mounting it could be useful if things go haywire.
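
Putting it together, obtaining a certificate with this image and the standalone plugin might look like the following sketch (assuming port 443 on the host is free and example.com resolves to this machine):

$ docker run -it --rm -p 443:443 --name certbot \
    -v /etc/letsencrypt:/etc/letsencrypt \
    -v /var/lib/letsencrypt:/var/lib/letsencrypt \
    -v /var/log/letsencrypt:/var/log/letsencrypt \
    quay.io/letsencrypt/letsencrypt:v0.9.1 certonly --standalone -d example.com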

Installing Let's Encrypt SSL Certificate on Google App Engine Using Certbot

Let’s Encrypt is a free, open, and automated certificate authority. And its Certbot is a fully-featured, extensible client for Let’s Encrypt CA that can automate the tasks of getting, renewing and even installing SSL certificates.

Sounds great! However, it is not yet simple and automated, especially when working with cloud providers such as Google Cloud Platform and its Google App Engine (GAE).

But it’s free. Yes, it’s free. Free software works better. Free certificate authority works better than others.

GAE is a managed service: the SSL certificates are stored on separate machines (load balancers). The automated domain validation performed by Certbot mostly works with a single machine. Therefore, when the machine issuing the certificate request is not the machine being validated, we need to find another way, hopefully an automated method, to perform domain validation across machines.

Before creating an automated method, let’s see if we can do it manually. Certbot supports a number of different plugins that can be used to obtain and/or install certificates. A plugin is like an extension that supports a particular web server. Let’s see if we can find a plugin that supports GAE.

Here are some supported by Certbot:

$ certbot --help plugins

plugins:
  Certbot client supports an extensible plugins architecture. See 'certbot
  plugins' for a list of all installed plugins and their names. You can
  force a particular plugin by setting options provided below. Running
  --help will list flags specific to that plugin.

  --apache         Obtain and install certs using Apache (default: False)
  --nginx          Obtain and install certs using Nginx (default: False)
  --standalone     Obtain certs using a "standalone" webserver. (default:
                   False)
  --manual         Provide laborious manual instructions for obtaining a
                   cert (default: False)
  --webroot        Obtain certs by placing files in a webroot directory.
                   (default: False)

There are also a number of third-party plugins; see the User Guide in the Certbot documentation. But there is none for GAE. It looks like there are only three options to try: standalone, webroot, and manual.

Let’s start with the standalone method, and issue that from the local machine:

$ sudo certbot certonly --standalone -d example.com

If this is your first time running the command, you will be prompted with email and agreement screens. Both can be automated via the --email and --agree-tos options. That’s the automated part.

After freeing up ports 80 and 443, I ran into some issues:

Failed authorization procedure. example.com (tls-sni-01): urn:acme:error:connection
:: The server could not connect to the client to verify the domain :: Failed to
connect to 0.0.0.0:443 for TLS-SNI-01 challenge, example.com (tls-sni-01):
urn:acme:error:connection :: The server could not connect to the client to verify the
domain :: Failed to connect to 0.0.0.0:443 for TLS-SNI-01 challenge
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: connection
Detail: Failed to connect to 0.0.0.0:443 for TLS-SNI-01
challenge
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.

The standalone plugin runs its own simple web server to prove that you control the domain. Ownership, or domain validation, is the key here. It requires the computer that issued the certbot command to have a publicly routable IP address, which is not going to happen on my local computer behind NAT. The webroot plugin needs a running web server, so it can’t be run from the local machine either. Domain validation is performed automatically with both the standalone and webroot plugins. Furthermore, the validation requests come from Let’s Encrypt servers, so you can’t have the machine issuing the certificate request sit behind NAT or a load balancer without properly routing those requests to it.

Since the automated methods mostly require the requester and the domain owner to reside on the same machine, we could try moving the request into the Google cloud. Otherwise, there is one more plugin to try: the manual plugin. The manual method (plugin) helps you obtain a cert by giving you instructions to perform domain validation yourself.
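
The manual attempt would start like this (a sketch; Certbot then prints the validation steps for you to carry out by hand):

$ sudo certbot certonly --manual -d example.com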

Installing Let's Encrypt Certbot 0.8.x on Debian Jessie

Let’s Encrypt is a free, open, and automated certificate authority. And its Certbot is “a fully-featured, extensible client for the Let’s Encrypt CA (or any other CA that speaks the ACME protocol) that can automate the tasks of obtaining certificates and configuring webservers to use them.”[^1]

There are a number of ways to obtain and install SSL certificates issued by Let’s Encrypt CA. This is about installing Certbot 0.8.0 release on Debian Jessie. But before continuing, a few things to think about:

The Let’s Encrypt Client (Certbot) presently only runs on Unix-ish OSes that include Python 2.6 or 2.7; Python 3.x support will hopefully be added in the future. … currently it supports modern OSes based on Debian, Fedora, SUSE, Gentoo and Darwin.[^1]

That’s why the Docker container installation method might be a better choice: it does not mess with your existing libraries, and it can use a supported operating system that might not be the one you are running.

Anyhow, the current installation settings are:

  • Debian 8.5 Jessie
  • Python 2.7.9
  • Certbot 0.8.0

Certbot is available for Debian Jessie via backports.

Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates) in a stable environment so that they will run without new libraries (whenever it is possible) on a Debian stable distribution.

Backports cannot be tested as extensively as Debian stable, and backports are provided on an as-is basis, with risk of incompatibilities with other components in Debian stable. Use with care!

It is therefore recommended to select single backported packages that fit your needs, and not use all available backports.

Again, that’s why it might be a better idea to use a container. But let’s proceed.

Add a new file named backports.list to the /etc/apt/sources.list.d/ directory:

$ sudo bash -c 'echo "deb http://ftp.debian.org/debian jessie-backports main" > \
/etc/apt/sources.list.d/backports.list'

Update:

$ sudo apt-get update

All backports are deactivated by default; therefore, to install the Certbot package from backports, run:

$ sudo apt-get install certbot -t jessie-backports
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
dialog python-acme python-certbot python-cffi-backend python-configargparse python-configobj python-cryptography
python-dialog python-enum34 python-funcsigs python-idna python-ipaddress python-mock python-ndg-httpsclient python-openssl
python-parsedatetime python-pbr python-psutil python-pyasn1 python-pyicu python-requests python-rfc3339 python-six
python-tz python-urllib3 python-zope.component python-zope.event python-zope.interface
Suggested packages:
python-certbot-apache python-certbot-doc python-acme-doc python-configobj-doc python-cryptography-doc
python-cryptography-vectors python-enum34-doc python-funcsigs-doc python-mock-doc python-openssl-doc python-openssl-dbg
python-psutil-doc doc-base python-ntlm
Recommended packages:
letsencrypt
The following NEW packages will be installed:
certbot dialog python-acme python-certbot python-cffi-backend python-configargparse python-configobj python-cryptography
python-dialog python-enum34 python-funcsigs python-idna python-ipaddress python-mock python-ndg-httpsclient python-openssl
python-parsedatetime python-pbr python-psutil python-pyasn1 python-pyicu python-requests python-rfc3339 python-tz
python-urllib3 python-zope.component python-zope.event python-zope.interface
The following packages will be upgraded:
python-six
1 upgraded, 28 newly installed, 0 to remove and 163 not upgraded.
Need to get 1,881 kB of archives.
After this operation, 10.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]

The APT option -t gives you simple control over which distribution packages are retrieved from. In this case, the distribution jessie-backports is used.

Interestingly, there is a letsencrypt package. Could this be the old client? Let’s query APT’s package cache:

$ apt-cache show letsencrypt
Package: letsencrypt
Source: python-certbot
Version: 0.8.0-1~bpo8+2
Installed-Size: 29
Maintainer: Debian Let's Encrypt
Architecture: all
Depends: certbot
Description-en: transitional dummy package
This is a transitional dummy package for the rename of letsencrypt to certbot.
It can safely be removed.

Yes, it’s a dummy package; the client has been renamed. And from the documentation:

Until May 2016, Certbot was named simply letsencrypt or letsencrypt-auto, depending on install method.[^1]

Let’s poke around the installed package:

$ certbot --version
certbot 0.8.0

It’s not yet 1.0.

Obtaining the quick help:

$ certbot --help

  certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:

  (default) run    Obtain & install a cert in your current webserver
  certonly         Obtain cert, but do not install it (aka "auth")
  install          Install a previously obtained cert in a server
  renew            Renew previously obtained certs that are near expiry
  revoke           Revoke a previously obtained certificate
  register         Perform tasks related to registering with the CA
  rollback         Rollback server configuration changes made during install
  config_changes   Show changes made to server config during installation
  plugins          Display information about installed plugins

Choice of server plugins for obtaining and installing cert:

  (the apache plugin is not installed)
  --standalone     Run a standalone webserver for authentication
  (nginx support is experimental, buggy, and not installed by default)
  --webroot        Place files in a server's webroot folder for authentication

OR use different plugins to obtain (authenticate) the cert and then install it:

  --authenticator standalone --installer apache

More detailed help:

  -h, --help [topic]   print this message, or detailed help on a topic;
                       the available topics are:
                       all, automation, paths, security, testing, or any of
                       the subcommands or plugins (certonly, install,
                       register, nginx, apache, standalone, webroot, etc.)

Now it’s time to obtain the certificate.
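
For example, to obtain one with the standalone plugin (a sketch, assuming ports 80 and 443 are free on a publicly reachable machine):

$ sudo certbot certonly --standalone -d example.com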

[^1]: Certbot Documentation

Installing Caddy 0.9.x on Ubuntu/Debian System

Install Caddy via its installer script on an Ubuntu/Debian system:

$ curl -s https://getcaddy.com/ | sudo bash
Downloading Caddy for linux/amd64...
https://caddyserver.com/download/build?os=linux&arch=amd64&arm=&features=
Extracting...
Putting caddy in /usr/local/bin (may require password)
Caddy 0.9.1 (+e8e5595)
Successfully installed

This is different from the Download page, where you get to select additional features (see the &features= URL query parameter).

$ which caddy
caddy is /usr/local/bin/caddy

Get the installed version:

$ caddy --version
Caddy 0.9.1 (+e8e5595)

Get help:

$ caddy -h
Usage of caddy:
  -agree
        Agree to the CA's Subscriber Agreement
  -ca string
        URL to certificate authority's ACME server directory (default "https://acme-v01.api.letsencrypt.org/directory")
  -conf string
        Caddyfile to load (default "Caddyfile")
  -cpu string
        CPU cap (default "100%")
  -email string
        Default ACME CA account email address
  -grace duration
        Maximum duration of graceful shutdown (default 5s)
  -host string
        Default host
  -http2
        Use HTTP/2 (default true)
  -log string
        Process log file
  -pidfile string
        Path to write pid file
  -plugins
        List installed plugins
  -port string
        Default port (default "2015")
  -quic
        Use experimental QUIC
  -quiet
        Quiet mode (no initialization output)
  -revoke string
        Hostname for which to revoke the certificate
  -root string
        Root path of default site (default ".")
  -type string
        Type of server to run (default "http")
  -version
        Show version

Run Caddy locally:

$ caddy
Activating privacy features... done.
http://:2015
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".

A file descriptor is simply a number that the operating system assigns to an open file to keep track of it. Caddy’s primary goal is to be an easy-to-use static file web server, so a high file descriptor limit means it can keep more files open and serve more users at the same time.

$ ulimit -Sn && ulimit -Hn
1024
4096

The current system is too low in both soft and hard limits. But since it’s not in production, the warning can be ignored.

Make sure the server is working:

$ http :2015
HTTP/1.1 404 Not Found
Content-Length: 14
Content-Type: text/plain; charset=utf-8
Server: Caddy
X-Content-Type-Options: nosniff

404 Not Found

The response header X-Content-Type-Options: nosniff prevents MIME-based attacks; it tells the browser to respect the response content type and not to override it.

A status code of 404 means the server is working; it just lacks an index file. Let’s create one:
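
For example (assuming the current directory is the site root Caddy serves from):

$ echo 'Hello, Caddy!' > index.html
$ http :2015

The request should now return 200 OK with the index file’s content.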

Installing jq from Source

The packages in both Ubuntu and Debian lag behind; therefore, to get the latest version of jq, build it from source.

There are a few prerequisites to install:

  • GCC
  • Make
  • Autotools

GCC and Make are usually installed if you do development, but Autotools is not. Luckily, this is easy to fulfill:

$ sudo apt-get install automake

Install from source (the git checkout does not ship a configure script, so generate it with autoreconf first):

$ sudo git clone https://github.com/stedolan/jq.git
$ cd jq
$ sudo git checkout jq-1.5
$ sudo autoreconf -i
$ sudo ./configure && sudo make && sudo make install

The binary is installed at:

$ which jq
jq is /usr/local/bin/jq

However, this gives an unexpected version tag:

$ jq --version
jq-1.5-dirty

The -dirty suffix comes from git describe: at build time the working tree no longer exactly matches the jq-1.5 tag, likely because the build regenerates some tracked files.

Will Docker Container Restart Pick Up Updated Image?

When a Docker image has been updated, will restarting the running container via docker restart pick up the change? An educated guess would be no, because, like restarting a process, the memory is still retained. The best way to find out is to give it a try.

Let’s start with a Dockerfile:

# Version Foo
FROM debian:8.5
CMD while true; do echo foo; sleep 5; done

The command will keep printing foo every 5 seconds.

Create the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo foo; sleep 5; done
---> Running in 38fdeb15f629
---> 6a56a50ef254
Removing intermediate container 38fdeb15f629
Successfully built 6a56a50ef254

Notice the image ID starting with 6a56.

Start the container:

$ docker run -d --name example example
dac42e7194e4ec2bdca8e24db29a3333ae2f422d316e341c5cb1499034a4357b

Check the log:

$ docker logs example
foo
foo

This is the expected output.

Inspect the container:

$ docker inspect example

The important field is the corresponding image, which matches the image we just built:

{
    ...
    "Image": "sha256:6a56a50ef254bb1d07117b0a0750ef81fafe9735ab3b0f2b0a14511f38d5b83d"
    ...
}

Now update the Dockerfile:

# Version Bar
FROM debian:8.5
CMD while true; do echo bar; sleep 5; done

This time it prints bar instead of foo.

Rebuild the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo bar; sleep 5; done
---> Running in 7fc297e12005
---> a6c04345afb9
Removing intermediate container 7fc297e12005
Successfully built a6c04345afb9

Now we have a different image. The image ID is different: a6c0. But the old image is still there:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
example             latest              a6c04345afb9        24 seconds ago      125.1 MB
<none>              <none>              6a56a50ef254        3 minutes ago       125.1 MB

Restart the container:

$ docker restart example
example

Got bar? No, still foo all the way in the log. And when you inspect the container, it still uses the old image.

So docker restart will not pick up changes from an updated image; the container keeps using the image it was created from. Therefore, the correct way is to drop the container entirely and run it again:

$ docker stop example && docker rm example && docker run -d --name example example
example
example
55cec9110fed0257060673a085a08f143003336b1720894f43c6ac5a22104935

The log shows the correct message:

$ docker logs example
bar
bar

Inspecting the container shows it now has the correct image:

$ docker inspect example
{
    ...
    "Image": "sha256:a6c04345afb953ab392241f56c04f72110c772a6ee3a36e248c1ffd03f81b7d6"
    ...
}

And don’t forget to delete the old image.
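
For example, removing the now dangling image by the ID shown above:

$ docker rmi 6a56a50ef254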

Settings:

$ docker --version
Docker version 1.12.0, build 8eab29e