Installing Let's Encrypt SSL Certificate on Google App Engine Using Certbot

Let’s Encrypt is a free, open, and automated certificate authority. Certbot is a fully-featured, extensible client for the Let’s Encrypt CA that can automate the tasks of obtaining, renewing, and even installing SSL certificates.

Sounds great! However, it is not yet simple and automated everywhere, especially when working with cloud providers such as Google Cloud Platform and its Google App Engine (GAE).

But it’s free. Yes, it’s free. Free software works better. A free certificate authority works better than the others.

GAE is a managed service. SSL certificates are stored on separate machines (load balancers). The automated domain validation performed by Certbot mostly works with a single machine, so when the machine issuing the certificate request is not the machine being validated, we need to find another way, hopefully an automated one, to perform domain validation across machines.

Before creating an automated method, let’s see if we can do it manually. Certbot supports a number of different plugins that can be used to obtain and/or install certificates. A plugin is like an extension that supports a particular web server. Let’s see if we can find a plugin that supports GAE.

Here are some supported by Certbot:

$ certbot --help plugins
plugins:
Certbot client supports an extensible plugins architecture. See 'certbot
plugins' for a list of all installed plugins and their names. You can
force a particular plugin by setting options provided below. Running
--help will list flags specific to that plugin.
--apache Obtain and install certs using Apache (default: False)
--nginx Obtain and install certs using Nginx (default: False)
--standalone Obtain certs using a "standalone" webserver. (default:
False)
--manual Provide laborious manual instructions for obtaining a
cert (default: False)
--webroot Obtain certs by placing files in a webroot directory.
(default: False)

There are also a number of third-party plugins; see the User Guide in the Certbot documentation. But there is none for GAE. It looks like there are only three options to try: standalone, webroot, and manual.

Let’s start with the standalone method, and issue that from the local machine:

$ sudo certbot certonly --standalone -d example.com

If this is the first time running the command, you will be prompted with email and agreement screens. Both can be automated via the --email and --agree-tos options. That’s the automated part.
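
For example, a non-interactive sketch (the email address and domain are placeholders):

$ sudo certbot certonly --standalone --agree-tos \
--email you@example.com -d example.com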

After freeing up ports 80 and 443, I still ran into some issues:

Failed authorization procedure. example.com (tls-sni-01): urn:acme:error:connection
:: The server could not connect to the client to verify the domain :: Failed to
connect to 0.0.0.0:443 for TLS-SNI-01 challenge, example.com (tls-sni-01):
urn:acme:error:connection :: The server could not connect to the client to verify the
domain :: Failed to connect to 0.0.0.0:443 for TLS-SNI-01 challenge
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: example.com
Type: connection
Detail: Failed to connect to 0.0.0.0:443 for TLS-SNI-01
challenge
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.

The standalone plugin runs its own simple web server to prove that you control the domain. Ownership, or domain validation, is the key here. It requires the computer that issued the certbot command to have a publicly routable IP address, which is not going to happen on my local computer behind NAT. The webroot plugin needs a running web server, so it cannot be run from the local machine either. Domain validation is done automatically with both the standalone and webroot plugins. Furthermore, validation requests come from Let’s Encrypt servers, so the machine issuing the certificate request cannot sit behind NAT or a load balancer unless the requests are properly routed to it.

Since the automated methods mostly require the requester and the validated host to be the same machine, we could try moving the request into the Google cloud. Otherwise, there is one more plugin to try: the manual plugin. The manual method (plugin) helps you obtain a cert by giving you instructions to perform domain validation yourself.
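
A sketch of the manual method (Certbot will print the validation instructions to follow by hand; the domain is a placeholder):

$ sudo certbot certonly --manual -d example.com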

Installing Let's Encrypt Certbot 0.8.x on Debian Jessie

Let’s Encrypt is a free, open, and automated certificate authority. And its Certbot is “a fully-featured, extensible client for the Let’s Encrypt CA (or any other CA that speaks the ACME protocol) that can automate the tasks of obtaining certificates and configuring webservers to use them.”[^1]

There are a number of ways to obtain and install SSL certificates issued by the Let’s Encrypt CA. This post is about installing the Certbot 0.8.0 release on Debian Jessie. But before continuing, a few things to think about:

The Let’s Encrypt Client (Certbot) presently only runs on Unix-ish OSes that include Python 2.6 or 2.7; Python 3.x support will hopefully be added in the future. … currently it supports modern OSes based on Debian, Fedora, SUSE, Gentoo and Darwin.[^1]

That’s why the Docker container installation method might be a better choice: it does not mess up your existing libraries, and the container can run a supported operating system that might not be the one you are using.

Anyhow, the current installation settings are:

  • Debian 8.5 Jessie
  • Python 2.7.9
  • Certbot 0.8.0

Certbot is available for Debian Jessie via backports.

Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates) in a stable environment so that they will run without new libraries (whenever it is possible) on a Debian stable distribution.

Backports cannot be tested as extensively as Debian stable, and backports are provided on an as-is basis, with risk of incompatibilities with other components in Debian stable. Use with care!

It is therefore recommended to select single backported packages that fit your needs, and not use all available backports.

Again, that’s why it might be a better idea to use a container. But let’s proceed.

Add a new file named backports.list to /etc/apt/sources.list.d/ directory:

$ sudo bash -c 'echo "deb http://ftp.debian.org/debian jessie-backports main" > \
/etc/apt/sources.list.d/backports.list'

Update:

$ sudo apt-get update

All backports are deactivated by default; therefore, to install the Certbot package from backports, run:

$ sudo apt-get install certbot -t jessie-backports
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
dialog python-acme python-certbot python-cffi-backend python-configargparse python-configobj python-cryptography
python-dialog python-enum34 python-funcsigs python-idna python-ipaddress python-mock python-ndg-httpsclient python-openssl
python-parsedatetime python-pbr python-psutil python-pyasn1 python-pyicu python-requests python-rfc3339 python-six
python-tz python-urllib3 python-zope.component python-zope.event python-zope.interface
Suggested packages:
python-certbot-apache python-certbot-doc python-acme-doc python-configobj-doc python-cryptography-doc
python-cryptography-vectors python-enum34-doc python-funcsigs-doc python-mock-doc python-openssl-doc python-openssl-dbg
python-psutil-doc doc-base python-ntlm
Recommended packages:
letsencrypt
The following NEW packages will be installed:
certbot dialog python-acme python-certbot python-cffi-backend python-configargparse python-configobj python-cryptography
python-dialog python-enum34 python-funcsigs python-idna python-ipaddress python-mock python-ndg-httpsclient python-openssl
python-parsedatetime python-pbr python-psutil python-pyasn1 python-pyicu python-requests python-rfc3339 python-tz
python-urllib3 python-zope.component python-zope.event python-zope.interface
The following packages will be upgraded:
python-six
1 upgraded, 28 newly installed, 0 to remove and 163 not upgraded.
Need to get 1,881 kB of archives.
After this operation, 10.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]

The APT option -t gives you simple control over which distribution packages are retrieved from. In this case, the distribution jessie-backports is used.
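
To double-check which version would be installed and from which archive, you can query the policy (output omitted):

$ apt-cache policy certbot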

It is interesting that there is a letsencrypt package. Could this be the old client? Let’s query APT’s package cache:

$ apt-cache show letsencrypt
Package: letsencrypt
Source: python-certbot
Version: 0.8.0-1~bpo8+2
Installed-Size: 29
Maintainer: Debian Let's Encrypt
Architecture: all
Depends: certbot
Description-en: transitional dummy package
This is a transitional dummy package for the rename of letsencrypt to certbot.
It can safely be removed.

Yes, it’s a dummy package; the client has been renamed. From the documentation:

Until May 2016, Certbot was named simply letsencrypt or letsencrypt-auto, depending on install method.[^1]

Let’s poke around the installed package:

$ certbot --version
certbot 0.8.0

It’s not yet 1.0.

Obtaining the quick help:

$ certbot --help
certbot [SUBCOMMAND] [options] [-d domain] [-d domain] ...
Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
it will attempt to use a webserver both for obtaining and installing the
cert. Major SUBCOMMANDS are:
(default) run Obtain & install a cert in your current webserver
certonly Obtain cert, but do not install it (aka "auth")
install Install a previously obtained cert in a server
renew Renew previously obtained certs that are near expiry
revoke Revoke a previously obtained certificate
register Perform tasks related to registering with the CA
rollback Rollback server configuration changes made during install
config_changes Show changes made to server config during installation
plugins Display information about installed plugins
Choice of server plugins for obtaining and installing cert:
(the apache plugin is not installed)
--standalone Run a standalone webserver for authentication
(nginx support is experimental, buggy, and not installed by default)
--webroot Place files in a server's webroot folder for authentication
OR use different plugins to obtain (authenticate) the cert and then install it:
--authenticator standalone --installer apache
More detailed help:
-h, --help [topic] print this message, or detailed help on a topic;
the available topics are:
all, automation, paths, security, testing, or any of the subcommands or
plugins (certonly, install, register, nginx, apache, standalone, webroot,
etc.)

Now it’s time to obtain the certificate.
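
For example, with the standalone plugin (assuming ports 80 and 443 are free and the host is publicly reachable; example.com is a placeholder):

$ sudo certbot certonly --standalone -d example.com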

[^1]: Certbot Documentation

Randomizing an Array with Sort

How to randomize an array? Use the sort command, with the option:

-R, --random-sort
sort by random hash of keys

For example:

$ seq 1 10 | sort -R
4
2
10
6
3
9
7
5
8
1
$ seq 1 10 | sort --random-sort
9
6
1
3
2
8
7
5
4
10

Listing Tags in Natural Sort of Version Numbers

Using the Node.js repository as an example:

$ git remote -v
origin https://github.com/nodejs/node.git (fetch)
origin https://github.com/nodejs/node.git (push)

If we would like to list all tags with v0.12 versions, we could do:

$ git tag -l 'v0.12.*'
v0.12.0
v0.12.1
v0.12.10
v0.12.11
v0.12.12
v0.12.13
v0.12.14
v0.12.15
v0.12.2
v0.12.3
v0.12.4
v0.12.5
v0.12.6
v0.12.7
v0.12.8
v0.12.8-rc.1
v0.12.9

However, the order is wrong: v0.12.10 through v0.12.15 come between v0.12.1 and v0.12.2, because the tags are sorted lexicographically rather than numerically.

To fix it, we use the sort command with the option:

-V, --version-sort
natural sort of (version) numbers within text

Thus:

$ git tag -l 'v0.12.*' | sort --version-sort
v0.12.0
v0.12.1
v0.12.2
v0.12.3
v0.12.4
v0.12.5
v0.12.6
v0.12.7
v0.12.8
v0.12.8-rc.1
v0.12.9
v0.12.10
v0.12.11
v0.12.12
v0.12.13
v0.12.14
v0.12.15

Streaming HTTP Request Directly to Response in Node.js

This is a Node.js starting script to stream HTTP request directly into response:

require('http').createServer((req, res) => {
  req.pipe(res); // Pipe request directly to response
}).listen(3000);

It behaves almost like an echo: you get back whatever you send. For example, use HTTPie to make a request to the above server:

$ echo foo | http --verbose --stream :3000 Content-Type:text/plain
POST / HTTP/1.1
Accept: application/json, */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Length: 4
Host: localhost:3000
User-Agent: HTTPie/0.9.6
Content-Type: text/plain
foo
HTTP/1.1 200 OK
Connection: keep-alive
Transfer-Encoding: chunked
foo

We can also set the Content-Type response header to echo back the request’s media type. The header must be set before any body data is written, so it goes before the pipe call:

require('http').createServer((req, res) => {
  // Set the response header before any body data is written
  if (req.headers['content-type']) {
    res.setHeader('Content-Type', req.headers['content-type']);
  }
  req.pipe(res); // Pipe request directly to response
}).listen(3000);

The response should have the Content-Type field as below:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: text/plain
Transfer-Encoding: chunked
foo

Notice that instead of the usual Content-Length in the response headers, we’ve got Transfer-Encoding: chunked. The default transfer encoding for Node.js HTTP is chunked:

Sending a ‘Content-length’ header will disable the default chunked encoding.[^1]

About transfer encoding:

Chunked transfer encoding is a data transfer mechanism in version 1.1 of the Hypertext Transfer Protocol (HTTP) in which data is sent in a series of “chunks”. It uses the Transfer-Encoding HTTP header in place of the Content-Length header, which the earlier version of the protocol would otherwise require. Because the Content-Length header is not used, the sender does not need to know the length of the content before it starts transmitting a response to the receiver. Senders can begin transmitting dynamically-generated content before knowing the total size of that content. … The size of each chunk is sent right before the chunk itself so that the receiver can tell when it has finished receiving data for that chunk. The data transfer is terminated by a final chunk of length zero.[^2]
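
To see the chunk framing on the wire, curl’s --raw option prints the body without decoding the transfer encoding. A sketch against the echo server above (the hexadecimal chunk size depends on the request body):

$ curl --raw -s -H 'Content-Type: text/plain' -d foo http://localhost:3000
3
foo
0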

With the above starting script, you can now attach transform streams to manipulate the request and stream it back in a chunked response.

Settings:

$ node --version
v6.3.1
$ http --version
0.9.6

[^1]: HTTP, Node.js API Docs

[^2]: Chunked transfer encoding, Wikipedia

Installing Caddy 0.9.x on Ubuntu/Debian System

Install Caddy via its installer script on Ubuntu/Debian system:

$ curl -s https://getcaddy.com/ | sudo bash
Downloading Caddy for linux/amd64...
https://caddyserver.com/download/build?os=linux&arch=amd64&arm=&features=
Extracting...
Putting caddy in /usr/local/bin (may require password)
Caddy 0.9.1 (+e8e5595)
Successfully installed

This is different from the Download page, where you get to select additional features (see the &features= URL query parameter).

$ type caddy
caddy is /usr/local/bin/caddy

Get the installed version:

$ caddy --version
Caddy 0.9.1 (+e8e5595)

Get help:

$ caddy -h
Usage of caddy:
-agree
Agree to the CA's Subscriber Agreement
-ca string
URL to certificate authority's ACME server directory (default "https://acme-v01.api.letsencrypt.org/directory")
-conf string
Caddyfile to load (default "Caddyfile")
-cpu string
CPU cap (default "100%")
-email string
Default ACME CA account email address
-grace duration
Maximum duration of graceful shutdown (default 5s)
-host string
Default host
-http2
Use HTTP/2 (default true)
-log string
Process log file
-pidfile string
Path to write pid file
-plugins
List installed plugins
-port string
Default port (default "2015")
-quic
Use experimental QUIC
-quiet
Quiet mode (no initialization output)
-revoke string
Hostname for which to revoke the certificate
-root string
Root path of default site (default ".")
-type string
Type of server to run (default "http")
-version
Show version

Run Caddy locally:

$ caddy
Activating privacy features... done.
http://:2015
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".

A file descriptor is simply a number that the operating system assigns to an open file to keep track of it. Caddy’s primary goal is to be an easy-to-use static file web server. Having a high file descriptor limit means it can open more files to serve more users at the same time.

$ ulimit -Sn && ulimit -Hn
1024
4096

The current system is too low in both soft and hard limits. But since this is not production, the warning can be ignored.

Make sure the server is working:

$ http :2015
HTTP/1.1 404 Not Found
Content-Length: 14
Content-Type: text/plain; charset=utf-8
Server: Caddy
X-Content-Type-Options: nosniff
404 Not Found

The response header X-Content-Type-Options: nosniff prevents MIME-based attacks; it tells the browser to respect the response content type and not to override it.

Status code 404 means the server is working; it just lacks an index file. Let’s create one.
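
A minimal sketch, assuming the current directory is the site root (the file name and content are arbitrary):

$ echo '<h1>Hello, Caddy!</h1>' > index.html

Requesting the server again should now return 200 OK with the page.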

Accessing Upwork JSON Data without the API

Upwork, formerly Elance-oDesk, is the world’s largest freelancing marketplace. I’m interested in what types of jobs are on the platform, and how many. For a lazy programmer, browsing each job category, clicking on each link, and copying those numbers is not the way to go. I need to automate this. There is an API, but before diving into the API documentation, let’s see if there is another way (“Rule of Diversity”).

Before continuing, a word of warning: this is prohibited:

Using any robot, spider, scraper, or other automated means to access the Site for any purpose without our express written permission or collecting or harvesting any personally identifiable information, including Account names, from the Site;[^2]

Poking around the web app shows that it communicates with its backend using the JSON data exchange format via the URL: https://www.upwork.com/o/jobs/browse/url. However, accessing the URL directly responds with a 404 page-not-found error. Something is missing.

Well, the web app is able to make the request successfully, so this is not difficult to tackle. A process of elimination on the working request will reveal the required information.

After a couple of tries, it turns out you just need to add the request header X-Requested-With: XMLHttpRequest, and the JSON response with status code 200 will be returned:

$ http --verbose https://www.upwork.com/o/jobs/browse/url \
X-Requested-With:XMLHttpRequest
GET /o/jobs/browse/url HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: www.upwork.com
User-Agent: HTTPie/0.9.6
X-Requested-With: XMLHttpRequest

The default sort is by creation time in descending order, so you don’t need to add the query parameter sort==create_time+desc (HTTPie syntax).
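
To follow along, save the response body to a local file first (upwork.json is simply the name used below):

$ http https://www.upwork.com/o/jobs/browse/url \
X-Requested-With:XMLHttpRequest > upwork.json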

Let’s load the response data into Node.js and perform a quick analysis:

$ node
> data = require('./upwork.json')
{ url: '/o/jobs/browse/',
searchResults:
{ q: '',
paging: { total: 87654, offset: 0 },
spellcheck: { corrected_queries: [] },
jobs:
[ [Object],
[Object],
[Object],
[Object],
[Object],
[Object],
[Object],
[Object],
[Object],
[Object] ],
smartSearch: { downloadTeamApplication: false },
facets:
{ jobType: [Object],
workload: [Object],
duration: [Object],
clientHires: [Object],
contractorTier: [Object],
categories: [Object],
previousClients: [Object],
subcategories: [] },
isSearchWithEmptyParams: true,
subcategories: [],
currentQuery: {},
rssLink: '/ab/feed/jobs/rss?api_params=1&q=',
atomLink: '/ab/feed/jobs/atom?api_params=1&q=',
queryParsedParams: [],
pageTitle: 'Freelance Jobs - Upwork' } }

The property searchResults.paging.total is the total number of jobs available:

> data.searchResults.paging
{ total: 87654, offset: 0 }
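
If jq is installed, the same value can be pulled straight from the saved file:

$ jq '.searchResults.paging.total' upwork.json
87654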

But the number is different from the web app: a lot less, about 50% fewer jobs found. Is that because the request is not recognized as coming from a logged-in user? Let’s find out.

Installing jq from Source

The packages in both Ubuntu and Debian lag behind; therefore, to get the latest version of jq, build it from source.

There are a few prerequisites to install:

  • GCC
  • Make
  • Autotools

GCC and Make are usually installed if you do development, but Autotools often is not. Luckily, this is easy to fix:

$ sudo apt-get install automake

Install from source:

$ sudo git clone https://github.com/stedolan/jq.git
$ cd jq
$ sudo git checkout jq-1.5
$ sudo ./configure && sudo make && sudo make install

The installed path is at:

$ type jq
jq is /usr/local/bin/jq

However, this gives me an unexpected tag:

$ jq --version
jq-1.5-dirty
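
The -dirty suffix most likely comes from git describe --dirty: the in-tree build generates files, so the working tree no longer exactly matches the jq-1.5 tag. A quick way to confirm (output will vary):

$ git status --short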

Will Docker Container Restart Pick Up Updated Image?

When a Docker image has been updated, will restarting the running container via docker restart pick up the change? An educated guess would be no, because a restarted container is still the same container, created from the same image. The best way to find out is to give it a try.

Let’s start with a Dockerfile:

# Version Foo
FROM debian:8.5
CMD while true; do echo foo; sleep 5; done

The command will keep printing foo every 5 seconds.

Create the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo foo; sleep 5; done
---> Running in 38fdeb15f629
---> 6a56a50ef254
Removing intermediate container 38fdeb15f629
Successfully built 6a56a50ef254

Notice the image ID starting with 6a56.

Start the container:

$ docker run -d --name example example
dac42e7194e4ec2bdca8e24db29a3333ae2f422d316e341c5cb1499034a4357b

Check the log:

$ docker logs example
foo
foo

This is the expected output.

Inspect the container:

$ docker inspect example

The important field is the corresponding image, which matches the previously built image:

{
...
"Image": "sha256:6a56a50ef254bb1d07117b0a0750ef81fafe9735ab3b0f2b0a14511f38d5b83d"
...
}

Now update the Dockerfile:

# Version Bar
FROM debian:8.5
CMD while true; do echo bar; sleep 5; done

This time it prints bar instead of foo.

Rebuild the image:

$ docker build -t example .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM debian:8.5
---> 1b088884749b
Step 2 : CMD while true; do echo bar; sleep 5; done
---> Running in 7fc297e12005
---> a6c04345afb9
Removing intermediate container 7fc297e12005
Successfully built a6c04345afb9

Now we have a different image. The image ID is different: a6c0. But the old image is still there:

$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
example latest a6c04345afb9 24 seconds ago 125.1 MB
<none> <none> 6a56a50ef254 3 minutes ago 125.1 MB

Restart the container:

$ docker restart example
example

Got bar? No, still foo all the way in the log. And inspecting the container shows it still uses the old image.

So, docker restart will not pick up the changes from the updated image; it will keep using the old image built previously. Therefore, the correct way is to drop the container entirely and run it again:

$ docker stop example && docker rm example && docker run -d --name example example
example
example
55cec9110fed0257060673a085a08f143003336b1720894f43c6ac5a22104935

The log shows the correct message:

$ docker logs example
bar
bar

Inspecting the container, now it has the correct image:

$ docker inspect example
{
...
"Image": "sha256:a6c04345afb953ab392241f56c04f72110c772a6ee3a36e248c1ffd03f81b7d6"
...
}

And don’t forget to delete the old image.
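
For example, remove the dangling image by the ID shown in the docker images listing above:

$ docker rmi 6a56a50ef254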

Settings:

$ docker --version
Docker version 1.12.0, build 8eab29e

Fixing Authorization Failure in AWS CLI by Synchronizing the Clock

Running into an error when executing an AWS command:

$ aws ec2 describe-instances
An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS
was not able to validate the provided access credentials

From the error message, it appears to be a problem with the access credentials. But after switching to new credentials, and even updating the AWS package, the error persisted. Trying out other commands produced an error message containing “signature not yet current” with timestamps, so the actual problem was an inaccurate local clock. Hence, the solution is to sync the local date and time by polling a Network Time Protocol (NTP) server:

$ sudo ntpdate pool.ntp.org

ntpdate can be run manually as necessary to set the host clock, or it can be run from the host startup script to set the clock at boot time. This is useful in some cases to set the clock initially before starting the NTP daemon ntpd. It is also possible to run ntpdate from a cron script. However, it is important to note that ntpdate with contrived cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing resource use. Finally, since ntpdate does not discipline the host clock frequency as does ntpd, the accuracy using ntpdate is limited.[^1]

From the description, we can learn that we can make things even easier by installing the NTP package:

$ sudo apt-get install -y ntp

Network Time Protocol daemon and utility programs. NTP, the Network Time Protocol, is used to keep computer clocks accurate by synchronizing them over the Internet or a local network, or by following an accurate hardware receiver that interprets GPS, DCF-77, NIST or similar time signals.[^2]

Verify the installation and execution:

$ ps -e | grep ntpd
4964 ? 00:00:00 ntpd
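
To see which upstream servers the daemon is actually polling, use the ntpq utility that ships with the package:

$ ntpq -p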

With the environment:

$ aws --version
aws-cli/1.10.53 Python/2.7.6 Linux/3.13.0-92-generic botocore/1.4.43

[^1]: $ man ntpdate
[^2]: $ apt-cache show ntp