Let’s Encrypt is a free, open, and automated certificate authority. And Certbot is a fully-featured, extensible client for the Let’s Encrypt CA that can automate the tasks of obtaining, renewing and even installing SSL certificates.
And it’s free. Yes, it’s free. Free software works better, and a free certificate authority works better than the others.
GAE is a managed service. SSL certificates are stored on separate machines (load balancers), while Certbot’s current automated domain validation mostly assumes a single machine. So when the machine issuing the certificate request is not the machine being validated, we need to find another way, hopefully an automated one, to perform domain validation across machines.
Before creating an automated method, let’s see if we can do it manually. Certbot supports a number of different plugins that can be used to obtain and/or install certificates. A plugin is like an extension that supports a particular web server. Let’s see if we can find a plugin that supports GAE.
Here are some of the plugins supported by Certbot:
$ certbot --help plugins
Certbot client supports an extensible plugins architecture. See 'certbot
plugins' for a list of all installed plugins and their names. You can
force a particular plugin by setting options provided below. Running
--help will list flags specific to that plugin.
--apache Obtain and install certs using Apache (default: False)
--nginx Obtain and install certs using Nginx (default: False)
--standalone Obtain certs using a "standalone" webserver. (default:
--manual Provide laborious manual instructions for obtaining a
cert (default: False)
--webroot Obtain certs by placing files in a webroot directory.
There are also a number of third-party plugins; see the User Guide in the Certbot documentation. But there is none for GAE. It looks like there are only three options to try: standalone, webroot and manual.
Let’s start with the standalone method, and issue that from the local machine:
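The request itself is a single command; a minimal sketch, using example.com as the domain (the same name that appears in the error output below):

$ sudo certbot certonly --standalone -d example.com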
If this is your first time running the command, you will be prompted for an email address and to accept the subscriber agreement. Both can be automated via the --email and --agree-tos options. That’s the automated part.
After freeing up ports 80 and 443, I ran into some issues:
example.com (tls-sni-01): urn:acme:error:connection :: The server could not
connect to the client to verify the domain :: Failed to connect to
0.0.0.0:443 for TLS-SNI-01 challenge
- The following errors were reported by the server:
Detail: Failed to connect to 0.0.0.0:443 for TLS-SNI-01
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
The standalone plugin runs its own simple web server to prove that you control the domain. Ownership, or domain validation, is the key here. It needs the computer that issued the certbot command to have a publicly routable IP address, which is not going to happen on my local computer behind NAT. The webroot plugin needs a running web server, so it can’t be run from the local machine either. Domain validation is done automatically with both the standalone and webroot plugins. Furthermore, the validation requests come from Let’s Encrypt servers, so the machine issuing the certificate request cannot sit behind NAT or a load balancer unless those requests are properly routed to it.
Since the automated methods mostly require the requester and the domain owner to reside on the same machine, we could try to move the request to the Google cloud. Otherwise, there is one more plugin to try: the manual plugin. The manual method (plugin) helps you obtain a cert by giving you instructions to perform domain validation yourself.
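A sketch of the manual request, again with example.com as a placeholder; Certbot then prints instructions (such as a challenge file to serve at a specific URL) for proving ownership yourself:

$ certbot certonly --manual -d example.com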
Let’s Encrypt is a free, open, and automated certificate authority. And its Certbot is “a fully-featured, extensible client for the Let’s Encrypt CA (or any other CA that speaks the ACME protocol) that can automate the tasks of obtaining certificates and configuring webservers to use them.”[^1]
There are a number of ways to obtain and install SSL certificates issued by the Let’s Encrypt CA. This is about installing the Certbot 0.8.0 release on Debian Jessie. But before continuing, a few things to think about:
The Let’s Encrypt Client (Certbot) presently only runs on Unix-ish OSes that include Python 2.6 or 2.7; Python 3.x support will hopefully be added in the future. … currently it supports modern OSes based on Debian, Fedora, SUSE, Gentoo and Darwin.[^1]
That’s why the Docker container installation method might be a better choice: it does not mess with your existing libraries, and the container can run one of the supported operating systems even if yours is not among them.
Backports are recompiled packages from testing (mostly) and unstable (in a few cases only, e.g. security updates) in a stable environment so that they will run without new libraries (whenever it is possible) on a Debian stable distribution.
Backports cannot be tested as extensively as Debian stable, and backports are provided on an as-is basis, with risk of incompatibilities with other components in Debian stable. Use with care!
It is therefore recommended to select single backported packages that fit your needs, and not use all available backports.
Again, that’s why it might be a better idea to use a container. But let’s proceed.
Add a new file named backports.list to the /etc/apt/sources.list.d/ directory:
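The file needs a single deb line pointing at a jessie-backports mirror; a minimal sketch (the mirror may differ for your region, and the backported package is assumed to be named certbot):

deb http://ftp.debian.org/debian jessie-backports main

Then refresh the package index and pull Certbot from backports:

$ sudo apt-get update
$ sudo apt-get install -t jessie-backports certbot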
The response should have the Content-Type field as below:
HTTP/1.1 200 OK
Notice that instead of usual Content-Length in the response header, we’ve got Transfer-Encoding: chunked. The default transfer encoding for Node.js HTTP is chunked:
Sending a ‘Content-length’ header will disable the default chunked encoding.[^1]
About transfer encoding:
Chunked transfer encoding is a data transfer mechanism in version 1.1 of the Hypertext Transfer Protocol (HTTP) in which data is sent in a series of “chunks”. It uses the Transfer-Encoding HTTP header in place of the Content-Length header, which the earlier version of the protocol would otherwise require. Because the Content-Length header is not used, the sender does not need to know the length of the content before it starts transmitting a response to the receiver. Senders can begin transmitting dynamically-generated content before knowing the total size of that content. … The size of each chunk is sent right before the chunk itself so that the receiver can tell when it has finished receiving data for that chunk. The data transfer is terminated by a final chunk of length zero.[^2]
With the above starting script, you can now attach some transform streams to manipulate the request and stream back a chunked response.
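The starting script itself isn’t reproduced here, but a minimal sketch of the idea is to pipe the request body through a Transform stream and back out; because no Content-Length is set, Node.js responds with Transfer-Encoding: chunked (the port number and the uppercasing transform are arbitrary):

const http = require('http');
const { Transform } = require('stream');

http.createServer((req, res) => {
  // Create a fresh Transform per request; it uppercases each chunk as it passes through.
  const upcase = new Transform({
    transform(chunk, encoding, callback) {
      callback(null, chunk.toString().toUpperCase());
    }
  });
  res.setHeader('Content-Type', 'text/plain');
  // No Content-Length header is set, so the response goes out as Transfer-Encoding: chunked.
  req.pipe(upcase).pipe(res);
}).listen(8000);

For example, curl --data 'hello world' http://localhost:8000 streams back HELLO WORLD in a chunked response.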
Putting caddy in /usr/local/bin (may require password)
Caddy 0.9.1 (+e8e5595)
This is different from the Download page, where you get to select additional features (see the &features= URL query parameter).
$ which caddy
caddy is /usr/local/bin/caddy
Get the installed version:
$ caddy --version
Caddy 0.9.1 (+e8e5595)
$ caddy -h
Usage of caddy:
  -agree       Agree to the CA's Subscriber Agreement
  -ca          URL to certificate authority's ACME server directory (default "https://acme-v01.api.letsencrypt.org/directory")
  -conf        Caddyfile to load (default "Caddyfile")
  -cpu         CPU cap (default "100%")
  -email       Default ACME CA account email address
  -grace       Maximum duration of graceful shutdown (default 5s)
  -http2       Use HTTP/2 (default true)
  -log         Process log file
  -pidfile     Path to write pid file
  -plugins     List installed plugins
  -quic        Use experimental QUIC
  -quiet       Quiet mode (no initialization output)
  -revoke      Hostname for which to revoke the certificate
  -root        Root path of default site (default ".")
  -type        Type of server to run (default "http")
Run Caddy locally:
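Presumably the command is just the bare binary, run from the directory you want to serve (Caddy listens on port 2015 by default):

$ caddy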
Activating privacy features... done.
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".
A file descriptor is simply a number that the operating system assigns to an open file to keep track of it. Caddy’s primary goal is to be an easy-to-use static file web server. Having a high file descriptor limit means it can keep more files open to serve users at the same time.
$ ulimit -Sn && ulimit -Hn
The current system’s soft and hard limits are both too low. But since this isn’t a production server, the warning can be ignored.
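If this were production, note that the ulimit -n 8192 suggested in the warning only raises the soft limit for the current shell, and only up to the hard limit. To raise both limits persistently on a typical PAM-based Linux, one sketch is to add the following to /etc/security/limits.conf and log in again:

*    soft    nofile    8192
*    hard    nofile    8192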
Make sure the server is working:
$ http :2015
Content-Type: text/plain; charset=utf-8
The response header X-Content-Type-Options: nosniff prevents MIME-based attacks; it tells the browser to respect the declared content type and not to override it.
A 404 status code means the server is working; it just lacks an index file. Let’s create one:
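A one-liner sketch is enough, since index.html is one of the index file names Caddy serves by default:

$ echo 'Hello, Caddy!' > index.html

Requesting http :2015 again should now return 200 OK with that content.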
Upwork, formerly Elance-oDesk, is the world’s largest freelancing marketplace. I’m interested to know what types of jobs are on the platform, and how many. For a lazy programmer, browsing each job category, clicking on each link, and copying those numbers is not the way to go. I need to automate this. There is an API. But before diving into the API documentation, let’s see if there is another way (“Rule of Diversity”).
Before continuing, a word of warning: this is prohibited:
Using any robot, spider, scraper, or other automated means to access the Site for any purpose without our express written permission or collecting or harvesting any personally identifiable information, including Account names, from the Site;[^2]
After poking around the web app, I found that it talks to its backend using the JSON data exchange format via the URL https://www.upwork.com/o/jobs/browse/url. However, accessing that URL directly responds with a 404 page-not-found error. Something is missing.
Well, the web app is able to make the request successfully, so this is not difficult to tackle. Using the process of elimination on the working request will reveal the required information.
After a couple of tries, it turns out you only need to add the request header X-Requested-With: XMLHttpRequest; the JSON response then comes back with status code 200:
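With HTTPie (the same client used earlier against Caddy), a sketch of such a request, assuming no extra query parameters are needed, looks like:

$ http https://www.upwork.com/o/jobs/browse/url X-Requested-With:XMLHttpRequest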
When a Docker image has been updated, will restarting the running container via docker restart pick up the change? An educated guess would be no: like restarting a process rather than launching a new one, the container is kept as-is and still points at the image it was created from. The best way to find out is to give it a try.
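A quick experiment (the container name web and the nginx image are just examples): record the image ID the container points to, pull the newer image, restart, and compare:

$ docker inspect --format '{{.Image}}' web   # image ID before
$ docker pull nginx:latest                   # fetch the updated image
$ docker restart web
$ docker inspect --format '{{.Image}}' web   # an unchanged ID means the old image is still in use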
Running into an error when executing an AWS command:
$ aws ec2 describe-instances
An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS
was not able to validate the provided access credentials
From the error message, it appears to be a problem with the access credentials. But after switching to new credentials, and even updating the AWS package, the error still persisted. After trying other commands, one error message contained “signature not yet current” with timestamps, so the actual problem was an inaccurate local clock. Hence, the solution is to sync the local date and time by polling a Network Time Protocol (NTP) server:
$ sudo ntpdate pool.ntp.org
ntpdate can be run manually as necessary to set the host clock, or it can be run from the host startup script to set the clock at boot time. This is useful in some cases to set the clock initially before starting the NTP daemon ntpd. It is also possible to run ntpdate from a cron script. However, it is important to note that ntpdate with contrived cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing resource use. Finally, since ntpdate does not discipline the host clock frequency as does ntpd, the accuracy using ntpdate is limited.[^1]
From the description, we can make things even easier by installing the NTP package:
$ sudo apt-get install -y ntp
Network Time Protocol daemon and utility programs
NTP, the Network Time Protocol, is used to keep computer clocks accurate by synchronizing them over the Internet or a local network, or by following an accurate hardware receiver that interprets GPS, DCF-77, NIST or similar time signals.[^2]
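Once ntpd is installed and running, its peer status can be checked with ntpq, and the failing command should work again after the clock is back in sync:

$ ntpq -p
$ aws ec2 describe-instances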