Randomizing an Array with Sort
How to randomize an array? Use the `sort` command with its random-sort option.
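The original snippet is not preserved here; a minimal sketch, assuming GNU coreutils `sort`, whose `-R` (`--random-sort`) option shuffles input lines:

```shell
# GNU sort -R orders lines by a random hash of their contents,
# so identical lines stay grouped together.
printf '1\n2\n3\n4\n5\n' | sort -R
```

Note that because `-R` hashes the keys, duplicate lines end up adjacent; for a uniform shuffle, `shuf` is the better tool.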
This is a Node.js starting script to stream the HTTP request directly into the response.
It behaves almost like an echo: you get back whatever you send. For example, use HTTPie to make a request to the above server.
We can also add the `Content-Type` response header, to echo back the media type of the request.
The response should then have the matching `Content-Type` field.
Notice that instead of the usual `Content-Length` in the response header, we’ve got `Transfer-Encoding: chunked`. The default transfer encoding for Node.js HTTP is chunked:
Sending a ‘Content-length’ header will disable the default chunked encoding.[^1]
About transfer encoding:
Chunked transfer encoding is a data transfer mechanism in version 1.1 of the Hypertext Transfer Protocol (HTTP) in which data is sent in a series of “chunks”. It uses the Transfer-Encoding HTTP header in place of the Content-Length header, which the earlier version of the protocol would otherwise require. Because the Content-Length header is not used, the sender does not need to know the length of the content before it starts transmitting a response to the receiver. Senders can begin transmitting dynamically-generated content before knowing the total size of that content. … The size of each chunk is sent right before the chunk itself so that the receiver can tell when it has finished receiving data for that chunk. The data transfer is terminated by a final chunk of length zero.[^2]
With the above starting script, you can now attach transform streams to manipulate the request and stream it back in a chunked response.
[^1]: HTTP, Node.js API Docs
[^2]: Chunked transfer encoding, Wikipedia
Install Caddy via its installer script on an Ubuntu/Debian system.
This is different from the Download page, where you get to select additional features (see the `&features=` URL query parameter).
Check the installed version and the help output, then run Caddy locally.
A file descriptor is simply a number that the operating system assigns to an open file to keep track of it. Caddy’s primary goal is to be an easy-to-use static file web server, and a high file descriptor limit means it can open more files to serve more users at the same time.
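To see the limits Caddy warns about, check the shell's open-file limits (the values vary by system):

```shell
# Soft limit: the current ceiling, raisable by the user
# up to the hard limit.
ulimit -Sn
# Hard limit: the absolute ceiling, raisable only by root.
ulimit -Hn
```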
The current system is too low in both soft and hard limits, but since it’s not in production, the warning can be ignored.
Make sure the server is working.
The response header `X-Content-Type-Options: nosniff` prevents MIME-based attacks; it tells the browser to respect the response content type and not to override it. Status code `404` means the server is working, it just lacks an index file. Let’s create one.
Upwork, formerly Elance-oDesk, is the world’s largest freelancing marketplace. I’m interested to know what types of jobs are on the platform, and how many. For a lazy programmer, browsing each job category, clicking on each link, and copying those numbers is not the way to go. I need to automate this. There is an API, but before diving into the API documentation, let’s see if there is another way (“Rule of Diversity”).
Before continuing, a word of warning, this is prohibited:
Using any robot, spider, scraper, or other automated means to access the Site for any purpose without our express written permission or collecting or harvesting any personally identifiable information, including Account names, from the Site;[^2]
After poking around the web app, I found that it communicates with its backend in the JSON data exchange format via the URL: https://www.upwork.com/o/jobs/browse/url. However, accessing the URL directly responds with a 404 page-not-found error. Something is missing.
Well, the web app is able to make the request successfully, so this is not difficult to tackle. Process of elimination from the working request will reveal the required information.
After a couple of tries, it turns out we just need to add the request header `X-Requested-With: XMLHttpRequest`, and the JSON response with status code `200` will be returned.
The default sort is by creation time in descending order, so you don’t need to add the query parameter `sort==create_time+desc` (HTTPie syntax).
Let’s load the response data into Node.js and perform a quick analysis.
The property `searchResults.paging.total` is the total number of jobs available.
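The exact payload is not reproduced here; a hypothetical, trimmed-down shape based on the property path above:

```javascript
// Hypothetical, trimmed-down response body; the real payload
// carries many more fields and the actual job listings.
const response = {
  searchResults: {
    paging: { offset: 0, count: 10, total: 123456 },
  },
};

// Total number of jobs available.
console.log(response.searchResults.paging.total); // 123456
```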
But the number is different from the web app: a lot less, 50% fewer jobs found. Is that because the request is not recognized as coming from a logged-in user? Let’s find out.
The jq packages in both Ubuntu and Debian lag behind; therefore, to get the latest version of jq, build it from source.
There are a few prerequisites to install: GCC, Make, and Autotools. Both GCC and Make are usually installed if you do development, but Autotools often is not. Luckily, this is easy to fulfill.
Install from source.
Note the installed path. However, the reported version gives an unexpected tag.
When a Docker image has been updated, will restarting the running container via `docker restart` pick up the change? An educated guess is no, because, as with restarting a process, the memory is still retained. The best way to find out is to give it a try.
Let’s start with a Dockerfile.
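The original Dockerfile is not preserved; a minimal sketch that matches the described behavior (the busybox base image and the exact loop are assumptions):

```dockerfile
FROM busybox
CMD ["sh", "-c", "while true; do echo foo; sleep 5; done"]
```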
The command will keep printing `foo` every 5 seconds.
Create the image, and notice the image ID starting with `6a56`.
Start the container.
Check the log; it shows the expected output.
Inspect the container. The important field is the corresponding image, which matches the previously built image.
Now update the Dockerfile.
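A sketch of the updated Dockerfile, again assuming a busybox base, with the echoed word changed:

```dockerfile
FROM busybox
CMD ["sh", "-c", "while true; do echo bar; sleep 5; done"]
```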
This time it prints `bar` instead of `foo`.
Rebuild the image. Now we have a different image, with a different ID: `a6c0`. But the old image is still there.
Restart the container. Got `bar`? No, still `foo` all the way in the log. And when you inspect the container, it still uses the old image.
So, `docker restart` will not pick up the changes from the updated image; it will keep using the image the container was created from. The correct way is to drop the container entirely and run it again.
The log now shows the correct message, and inspecting the container confirms it has the correct image.
And don’t forget to delete the old image.
I ran into an error when executing an AWS command.
From the error message, it appeared to be a problem with the access credentials. But after updating to new credentials, and even updating the AWS package, the error persisted. After trying out other commands, one error message contained “signature not yet current” with timestamps. So the actual problem was an inaccurate local clock. Hence, the solution is to sync the local date and time by polling a Network Time Protocol (NTP) server.
ntpdate can be run manually as necessary to set the host clock, or it can be run from the host startup script to set the clock at boot time. This is useful in some cases to set the clock initially before starting the NTP daemon ntpd. It is also possible to run ntpdate from a cron script. However, it is important to note that ntpdate with contrived cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing resource use. Finally, since ntpdate does not discipline the host clock frequency as does ntpd, the accuracy using ntpdate is limited.[^1]
From the description, we learn that we can make things even easier by installing the NTP package.
Network Time Protocol daemon and utility programs: NTP, the Network Time Protocol, is used to keep computer clocks accurate by synchronizing them over the Internet or a local network, or by following an accurate hardware receiver that interprets GPS, DCF-77, NIST or similar time signals.[^2]
Verify the installation and execution.
[^1]: $ man ntpdate
[^2]: $ apt-cache show ntp
Creating a Docker data volume container in a Dockerfile is unbelievably simple: just use the VOLUME instruction, which creates a mount point and attaches volumes from the native host or other containers.
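A minimal sketch of such a Dockerfile (the base image and the `/data` mount path are assumptions):

```dockerfile
FROM debian:jessie
# Create a mount point at /data; other containers can attach to it
# with the --volumes-from option.
VOLUME ["/data"]
```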
Build the data container. The built size is just about 125.1 MB.
The first `data` is the name of the container; the second `data` is the name of the Docker image.
To attach the data volume container to another container, use the `--volumes-from` option.
If there is initial data to copy, add the `COPY` instruction as well.
Escape characters are part of the syntax for many programming languages, data formats, and communication protocols. For a given alphabet an escape character’s purpose is to start character sequences (so named escape sequences), which have to be interpreted differently from the same characters occurring without the prefixed escape character.[^2]
JSON or JavaScript Object Notation is a data interchange format. It has an escape character as well.
In many programming languages such as C, Perl, and PHP and in Unix scripting languages, the backslash is an escape character, used to indicate that the character following it should be treated specially (if it would otherwise be treated normally), or normally (if it would otherwise be treated specially).[^3]
JavaScript also uses backslash as an escape character. JSON is based on a subset of the JavaScript Programming Language, therefore, JSON also uses backslash as the escape character:
A string is a sequence of zero or more Unicode characters, wrapped in double quotes, using backslash escapes.[^1]
A character can be any Unicode character except `"`, `\`, or a control character, or one of the following escape sequences:

- `\"`
- `\\`
- `\/`
- `\b`
- `\f`
- `\n`
- `\r`
- `\t`
- `\u` followed by four hex digits

Only these few characters can be escaped in JSON. If the character following the backslash is not one of those listed, parsing fails with a `SyntaxError`[^4].
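This can be verified in Node.js with `JSON.parse` (the input strings are illustrative):

```javascript
// A listed escape sequence, \n, parses fine.
const ok = JSON.parse('"line1\\nline2"');
console.log(ok.split('\n')); // [ 'line1', 'line2' ]

// An unlisted escape, such as \x, is a SyntaxError.
try {
  JSON.parse('"\\x41"');
} catch (err) {
  console.log(err.name); // SyntaxError
}
```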
What’s the latest release of Docker?
Its homepage doesn’t tell you anything. You have to poke around and click on a few links, which may or may not get you what you want. A quick way, better yet a CLI method, would be great.
There are a couple of things we can do. First, when installing Docker, we use the URL https://get.docker.com/. It has a path that returns installation instructions with the version number.
There is another way. Well, there is always another way. The Docker project is hosted on GitHub, so we can use the repository’s `releases/latest` URL, which will be redirected to the latest release.
Since it’s a redirect, we can use the HTTP HEAD method without downloading the entire response body. Extracting and processing the value of the `Location` field will get us what we are looking for.
Let’s construct a simple command to obtain this information; it will return `v1.11.2`.
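A sketch of such a command (it assumes `curl`, network access, and GitHub's `v<version>` tag format; the extraction pipeline is one of many possibilities):

```shell
# HEAD request only (-I); the Location header of the redirect
# ends with the latest release tag, e.g. .../releases/tag/v1.11.2.
curl -sI https://github.com/docker/docker/releases/latest |
  tr -d '\r' |
  awk -F/ 'tolower($1) ~ /^location:/ { print $NF }'
```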
By using GitHub, we can obtain the latest stable release version not only of Docker but of other projects too. In fact, if a project is hosted on GitHub and its releases are properly tagged, you can use this method to obtain the version. However, if it’s not properly tagged, as with Node.js, you need to find another way.