Use CLI s3api to Retrieve Metadata of Amazon S3 Object

The Amazon S3 CLI has only a handful of commands:

cp ls mb mv rb rm sync website

We can use the cp command to retrieve an S3 object:

$ aws s3 cp s3://mybucket/myfile.json myfile.json

But if only the metadata of the object, such as the ETag or Content-Type, is needed, the S3 CLI does not have a command for that.

Enter the s3api CLI. Its commands can retrieve not only S3 objects but also the associated metadata. For example, to retrieve an S3 object, similar to aws s3 cp:

$ aws s3api get-object --bucket mybucket --key myfile.json myfile.json

If just the metadata of the object is needed, use head-object, which retrieves the metadata without the object itself, like the HTTP HEAD method.
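
For example, with the same placeholder bucket and key as above:

$ aws s3api head-object --bucket mybucket --key myfile.json

The JSON response includes fields such as ETag, ContentType, ContentLength, and LastModified.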

Passage of Time 2014

In 2014, there were 6,409 unique visitors making 7,497 visits and generating 8,711 pageviews, about 24 daily pageviews.

Where are my visitors located?

33.37% of visits came from the United States. Second place was India with 6.87%.

Where do my visitors come from?

Google was the top traffic source with 87.47% of all visits. Social was almost non-existent.

What devices and operating systems are my visitors using?

44.14% used Windows, followed closely by Macintosh with 36.63%. And 96.77% of visitors used desktop devices.

What is the top post?

The most popular post is still Split JSON File into Multiple Parts.

How is my site speed?

The average page load time was 4.33 seconds, and the slowest of the 56 samples was 19.71 seconds.

Statistics in December 2014

In December 2014, there were 837 unique visitors making 950 visits and generating 1,128 pageviews, about 36 daily pageviews, virtually flat compared to last month.

As of today, my blog is still only estimated by Alexa, which ranks it 781,909 globally and 146,776 in the United States. Higher than last month, but these are just estimates.

Where are my visitors located?

29.27% of visits came from the United States, an almost 5% drop compared to last month. Second place was India with 8.60%, an increase of more than 1%.

Where do my visitors come from?

Google was the top traffic source with 82% of all visits, an almost 2% drop month over month. Social was still almost non-existent.

What devices and operating systems are my visitors using?

48.39% used Windows, followed closely by Macintosh with 32.14%. The gain in Windows is interesting. And 97.25% of visitors used desktop devices, not much of a change.

What is the top post?

The most popular post is still Split a Large JSON File into Smaller Pieces; it is even more popular than the site’s main index page.

How is my site speed?

The average page load time was 2.93 seconds, and the slowest sample was 4.09 seconds. This is better news than last month.

Is my blog getting more and more visitors?

Yes! The month-to-month gain is just 2.15%, but the year-to-year gain is 603.70%, more than half of a thousand percent.

Mount Multiple Data Volumes and Multiple Data Volume Containers in Docker

Multiple Data Volumes from a Single Container

Create and run a container with multiple data volumes:

$ docker run -d --name vol -v /vol1 -v /vol2 ubuntu

Mount the data volumes in a new container:

$ docker run -it --rm --name foo --volumes-from=vol ubuntu

Both volumes will be mounted under the root directory as /vol1 and /vol2.
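
A quick way to verify, for example, is listing both directories from a throwaway container:

$ docker run --rm --volumes-from=vol ubuntu ls -d /vol1 /vol2
/vol1
/vol2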

Multiple Data Volume Containers

Create and run multiple data volume containers:

$ for i in {1..2}; do docker run -d --name vol${i} -v /vol${i} ubuntu; done

Mount multiple data volume containers:

$ docker run -it --rm --name foo --volumes-from=vol1 --volumes-from=vol2 ubuntu

Now there are also two volumes mounted under the root directory as /vol1 and /vol2, one from each container.

Multiple Data Volume Containers But Sharing the Same Data Volume Name

Create multiple data volume containers with the same data volume name /vol:

$ for i in {1..2}; do docker run -d --name vol${i} -v /vol ubuntu; done

Mount multiple data volume containers:

$ docker run -it --rm --name foo --volumes-from=vol1 --volumes-from=vol2 ubuntu

This time the two containers collide: both expose a data volume at /vol, so only one of the two volumes ends up mounted at /vol in the new container (Docker does not guarantee which one).

Revoke and Grant Public IP Addresses to Amazon EC2 Instances Via AWS Command Line Interface (CLI)

If you work from place to place, such as from one coffee shop to another, you may need access to your Amazon EC2 instances without allowing traffic from all IP addresses. You can use EC2 security groups to allow the IP addresses of those locations. But once you move on to a different location, you want to delete the IP address of the previous one. Doing this manually, over and over again, quickly becomes cumbersome. Here is a command line method that quickly removes all other locations and allows traffic only from your current location.

The steps are:

  1. Revoke access to a particular port from all existing source IP addresses
  2. Grant access to the port only from the current IP address

Assume the following:

  • Profile: default
  • Security group: mygroup
  • Protocol: tcp
  • Port: 22

First, revoke access to the port from all IP addresses:

$ aws ec2 describe-security-groups \
--profile default \
--group-names mygroup \
--query 'SecurityGroups[0].IpPermissions[?ToPort==`22`].IpRanges[].CidrIp' | \
jq .[] | \
xargs -n 1 aws ec2 revoke-security-group-ingress \
--profile default \
--group-name mygroup \
--protocol tcp \
--port 22 \
--cidr

The aws ec2 describe-security-groups command before the first pipe returns JSON formatted data, filtered via a JMESPath query (the --query option supported by the AWS CLI), for example:

[
"XXX.XXX.XXX.XXX/32",
"XXX.XXX.XXX.XXX/32"
]

The jq command converts the JSON array into line-by-line strings, which xargs takes in, looping through to revoke one IP address at a time.
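
For example, feeding the array above through jq:

$ echo '["XXX.XXX.XXX.XXX/32", "XXX.XXX.XXX.XXX/32"]' | jq '.[]'
"XXX.XXX.XXX.XXX/32"
"XXX.XXX.XXX.XXX/32"

xargs then strips the surrounding double quotes before passing each address to --cidr.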

After this step, all the originally allowed IP addresses are revoked. The next step is to grant access to the port from a single IP address:
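
For example, using checkip.amazonaws.com to discover the current public IP address (any similar lookup service would do):

$ aws ec2 authorize-security-group-ingress \
--profile default \
--group-name mygroup \
--protocol tcp \
--port 22 \
--cidr "$(curl -s https://checkip.amazonaws.com)/32"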

Ansible: Update Servers to the Latest and Reboot

This is for Debian/Ubuntu flavored systems.

Keeping a single server up to date is easy, but to update multiple servers at once, you need a tool like Ansible. For each server, here is a list of the basic steps:

  1. Check if there are packages available to be upgraded
  2. Upgrade all packages to the latest version
  3. Check if a reboot is required
  4. Reboot the server

When we log into a remote server, we might see a message showing the number of packages that can be updated. The message is generated by:

$ sudo /usr/lib/update-notifier/update-motd-updates-available
25 packages can be updated.
18 updates are security updates.

And it is available at:

$ cat /var/lib/update-notifier/updates-available
25 packages can be updated.
18 updates are security updates.

We don’t need that much detail; we simply want to know whether there are updates available.

The /usr/lib/update-notifier/apt-check script shows any pending updates:

$ /usr/lib/update-notifier/apt-check
25;18

To list all the package names instead of the simple packages;security count format:

$ /usr/lib/update-notifier/apt-check --package-names

The --package-names option writes the data to stderr instead of stdout. If there are no packages to be installed, the stderr output should be empty.
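
To confirm, discard stderr and note that nothing is printed to stdout:

$ /usr/lib/update-notifier/apt-check --package-names 2>/dev/null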

If there are packages to be installed or upgraded, Ansible has the apt module to manage them on Debian/Ubuntu based systems:

- name: Check if there are packages available to be installed/upgraded
  command: /usr/lib/update-notifier/apt-check --package-names
  register: packages

- name: Upgrade all packages to the latest version
  apt: update_cache=yes upgrade=dist
  when: packages.stderr != ""
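
Debian/Ubuntu systems create the file /var/run/reboot-required when an upgrade calls for a reboot, which covers the remaining two steps. A minimal sketch of those tasks (the registered variable name reboot_required is arbitrary):

- name: Check if a reboot is required
  stat: path=/var/run/reboot-required
  register: reboot_required

- name: Reboot the server
  command: shutdown -r now "Rebooting after package upgrades"
  when: reboot_required.stat.exists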

Update Outdated NPM Packages

NPM provides a command to check for outdated packages:

$ npm help outdated

However, by default, the command checks the entire dependency tree: not only the modules specified in package.json, but also the dependencies of those modules. If we only care about the top-level packages, we can add the --depth option to show just that:

$ npm outdated --depth 0

This is similar to listing installed packages:

$ npm list --depth 0

With this option, npm does not print the nested dependency tree, only the top level. The option is similar to tree -L 1, but zero-indexed instead of one-indexed.

Another interesting thing about the outdated command is the color coding in the output: red means a newer version matching the semver range in package.json is available (npm update will install it), while yellow indicates a newer version beyond the declared range.

Use SSH Directly Instead of Vagrant SSH Command

The Vagrant command vagrant ssh connects to the running virtual machine via SSH. Extra SSH arguments can also be added:

$ vagrant ssh -h
Usage: vagrant ssh [options] [name] [-- extra ssh args]

Options:

    -c, --command COMMAND            Execute an SSH command directly
    -p, --plain                      Plain mode, leaves authentication up to user
    -h, --help                       Print this help

For example, execute a single command:

$ vagrant ssh -c date
Wed Dec 10 12:00:00 UTC 2014
Connection to 127.0.0.1 closed.

or:

$ vagrant ssh -- date
Wed Dec 10 12:00:00 UTC 2014

which prints no “Connection closed” message.

Another example:

$ vagrant ssh -- 'cat /etc/hosts'
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

However, interactive applications cannot be used in this way:

$ vagrant ssh -- top
TERM environment variable not set.

Well, since we already have a virtual machine running, we can just use SSH directly.
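
For example, vagrant ssh-config prints the SSH parameters of the running machine; save them to a file and pass it to ssh with -F (the file name vagrant-ssh.config is arbitrary, and the host alias default matches a single-machine Vagrantfile). Adding -t forces a pseudo-terminal, so interactive applications like top work:

$ vagrant ssh-config > vagrant-ssh.config
$ ssh -t -F vagrant-ssh.config default top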

Vagrant

  1. What is Vagrant?
  2. What are the main features?
  3. Why did they build it?
  4. How would it benefit me?
  5. Where does this fit into the stack?

What is Vagrant?

Vagrant is a configuration management tool for creating, configuring, and managing complete development environments in virtual machines.

What are the main features?

Development environment version control

Configuration for setting up a development environment is in code (the Vagrantfile), so the development environment can be version controlled.
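
For example, vagrant init generates a Vagrantfile in the current directory, which can then be committed like any other source file (the box name here is just an example):

$ vagrant init hashicorp/precise64
$ git add Vagrantfile
$ git commit -m 'Add Vagrant development environment'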

Consistent but distributed development environments

Build one development environment and distribute it to the rest of the team, creating an identical development environment for everyone on the team.

Distributed development environment, but linked by source control. “I like to think of Vagrant as the Git of development clouds. Centralized development and test environments create bottlenecks. Vagrant lets developers work at their own pace and in their own environment, while keeping all the environment synchronized with each other.” (By Jeff Sussna) [1].

Disposable computing resource

“Vagrant lowers development environment setup time, increases development/production parity, and brings the idea of disposable compute resources down to the desktop.” [1]

“At the end of the day, Vagrant can suspend, halt, or destroy the development environment, keeping the overall system clean. Never again can developers forget to shut down a stray server process and waste precious compute resources.” [1]

Simple development environment setup

Forget about README or INSTALL instructions: type vagrant up and you are good to go.

“Say goodbye to ‘works on my machine’ bugs.” [2] For designers, “no more bothering other developers to help you fix your environment so you can test designs. Just check out the code, vagrant up, and start designing.” [2]