Tools

Migrate to Cloud9 IDE

My blog is powered by Hexo:

Hexo is a fast, simple and powerful blog framework. You write posts in Markdown (or other languages) and Hexo generates static files with a beautiful theme in seconds. - https://hexo.io/docs/

To generate static files, I need a Node.js development environment. But as I move between different computers and operating systems, this becomes inconvenient. I need a cloud development environment that I can access from any device.

So, I have decided to move my blog to Cloud9 IDE, a development environment in the cloud. This is the first post written from that environment.

Looking forward!

Git End-of-Line Normalization by AutoCRLF with Input

When provisioning with Vagrant on a Windows host, line endings are not converted from CRLF to LF (as used on Linux systems), so scripts cannot be executed properly in the guest Linux system. This is also a frequently occurring message when working on Windows:

warning: LF will be replaced by CRLF in README.md.
The file will have its original line endings in your working directory.

One solution is to use UNIX line endings (LF) even on Windows, so that no conversion is needed. This is done by setting:

$ git config --global core.autocrlf input

This means that when committing code, CRLF line endings (if any) are converted to LF, but when checking out, no conversion from LF to CRLF is performed.
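The effect of this normalization can be sketched in plain shell; here tr stands in for the CRLF-to-LF conversion Git performs when staging (a minimal illustration, not Git's actual implementation):

```shell
# A file authored on Windows ends each line with CRLF (\r\n).
printf 'line one\r\nline two\r\n' > crlf.txt

# Strip the carriage returns, as core.autocrlf=input does on commit.
tr -d '\r' < crlf.txt > lf.txt

# Byte counts differ by one stripped \r per line.
wc -c < crlf.txt   # 20 bytes
wc -c < lf.txt     # 18 bytes
```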

Launch Byobu Automatically on Vagrant SSH

I can use SSH directly instead of the vagrant ssh command for interactive programs such as top, or for running tmux sessions. But frequently I just want to run tmux or Byobu upon login. I can do:

$ ssh -F ssh-config vagrant -t byobu

But it is still too much trouble to go through all the SSH configuration steps, so I ended up with a two-step process:

$ vagrant ssh

Now inside the guest machine, I immediately type:

$ byobu

This brings up a new tmux session or attaches to an existing one.

Docker Save, Load and Deploy

Need to deploy private Docker images without a private registry? Try docker save and docker load.

Working set:

$ docker images
REPOSITORY     TAG            IMAGE ID       CREATED        VIRTUAL SIZE
realguess/jq   latest         1f3f837970bf   3 months ago   6.107 MB
realguess/jq   1.4            6071e18eae76   3 months ago   6.107 MB
busybox        ubuntu-14.04   f6169d24347d   6 weeks ago    5.609 MB

Actions to perform:

  1. Save an image with a single tag
  2. Save an image with multiple tags
  3. Save multiple images

Save a single tagged image:

$ docker save realguess/jq:latest > realguess-jq-latest.tar

Save a single image with all tags:

$ docker save realguess/jq > realguess-jq.tar

The tagged one is slightly smaller in size:

$ ls -lh realguess*.tar
-rw-rw-r-- 1 chao chao 6.0M Feb 1 12:00 realguess-jq-latest.tar
-rw-rw-r-- 1 chao chao 6.5M Feb 1 12:00 realguess-jq.tar

Save multiple images:

$ docker save busybox realguess/jq > busy-realguess-jq-box.tar

The size is almost twice as much compared to a single-image tar:

$ ls -lh busy*
-rw-rw-r-- 1 chao chao 12M Feb 1 12:00 busy-realguess-jq-box.tar

Compress it:

$ out=busy-realguess-jq-box.tar && docker save busybox realguess/jq > $out && gzip $out

Much better in size:

$ ls -lh busy*
-rw-rw-r-- 1 chao chao 5.7M Feb 1 12:00 busy-realguess-jq-box.tar.gz

To do the reverse, load the saved Docker images with docker load.
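For example, the compressed archive from above can be copied to the target host and loaded there. A sketch, assuming a recent enough Docker (docker load reads gzip-compressed archives directly, so no separate gunzip step is needed):

```
$ docker load < busy-realguess-jq-box.tar.gz
$ docker images
```

The second command simply verifies that the loaded images are now available locally.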

Mount Multiple Data Volumes and Multiple Data Volume Containers in Docker

Multiple Data Volumes from a Single Container

Create and run multiple data volumes in a container:

$ docker run -d --name vol -v /vol1 -v /vol2 ubuntu

Mount the data volumes in a new container:

$ docker run -it --rm --name foo --volumes-from=vol ubuntu

Both volumes will be mounted under the root directory as /vol1 and /vol2.

Multiple Data Volume Containers

Create and run multiple data volume containers:

$ for i in {1..2}; do docker run -d --name vol${i} -v /vol${i} ubuntu; done

Mount multiple data volume containers:

$ docker run -it --rm --name foo --volumes-from=vol1 --volumes-from=vol2 ubuntu

Now there are also two volumes mounted under the root directory as /vol1 and /vol2, one from each container.

Multiple Data Volume Containers But Sharing the Same Data Volume Name

Create multiple data volume containers with the same data volume name /vol:

$ for i in {1..2}; do docker run -d --name vol${i} -v /vol ubuntu; done

Mount multiple data volume containers:

$ docker run -it --rm --name foo --volumes-from=vol1 --volumes-from=vol2 ubuntu

Ansible: Update Servers to the Latest and Reboot

This is for Debian/Ubuntu flavored systems.

Keeping a single server up to date is easy, but to update multiple servers at once, you need tools like Ansible. For each server, here is a list of the basic steps:

  1. Check if there are packages available to be upgraded
  2. Upgrade all packages to the latest version
  3. Check if a reboot is required
  4. Reboot the server

When we log into a remote server, we might see a message showing the number of packages that can be updated. The message is generated by:

$ sudo /usr/lib/update-notifier/update-motd-updates-available

25 packages can be updated.
18 updates are security updates.

And it is available at:

$ cat /var/lib/update-notifier/updates-available

25 packages can be updated.
18 updates are security updates.

We don’t need that detailed information; we simply want to know whether there are updates available.

The script /usr/lib/update-notifier/apt-check shows any pending updates:

$ /usr/lib/update-notifier/apt-check
25;18

To list all the package names instead of the simple packages;security format:

$ /usr/lib/update-notifier/apt-check --package-names

The --package-names option writes its data to stderr instead of stdout. If no packages need to be installed, stderr should be empty.

If there are packages to be installed or upgraded, Ansible’s apt module can manage them on Debian/Ubuntu based systems:

- name: Check if there are packages available to be installed/upgraded
  command: /usr/lib/update-notifier/apt-check --package-names
  register: packages

- name: Upgrade all packages to the latest version
  apt: update_cache=yes upgrade=dist
  when: packages.stderr != ""
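The remaining two steps (check whether a reboot is required, then reboot) can be handled the same way. A sketch, assuming the Debian/Ubuntu update-notifier creates the marker file /var/run/reboot-required when a reboot is pending; the command-based reboot below is illustrative, so adjust it to your Ansible version:

```yaml
- name: Check if a reboot is required
  stat: path=/var/run/reboot-required
  register: reboot_required

- name: Reboot the server
  command: shutdown -r now "Rebooting after package upgrade"
  when: reboot_required.stat.exists
```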

Update Outdated NPM Packages

NPM provides a command to check for outdated packages:

$ npm help outdated

However, by default, the command checks the entire dependency tree: not only the modules specified in package.json, but also the dependencies of those modules. If we only care about the top-level packages, we can add the --depth option to show just that:

$ npm outdated --depth 0

This is similar to listing installed packages:

$ npm list --depth 0

With this option, it will not print the nested dependency tree, only the top level. The option is similar to tree -L 1, but zero-indexed instead of one-indexed.

Another interesting thing about outdated command is the color coding in the output: