Tools

Use SSH Directly Instead of Vagrant SSH Command

The vagrant ssh command connects to the running virtual machine via SSH. Extra SSH arguments can also be passed:

$ vagrant ssh -h
Usage: vagrant ssh [options] [name] [-- extra ssh args]

Options:
    -c, --command COMMAND        Execute an SSH command directly
    -p, --plain                  Plain mode, leaves authentication up to user
    -h, --help                   Print this help

For example, execute a single command:

$ vagrant ssh -c date
Wed Dec 10 12:00:00 UTC 2014
Connection to 127.0.0.1 closed.

or:

$ vagrant ssh -- date
Wed Dec 10 12:00:00 UTC 2014

which does not print the connection closed message.

Another example:

$ vagrant ssh -- 'cat /etc/hosts'
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

However, interactive applications cannot be used in this way:

$ vagrant ssh -- top
TERM environment variable not set.

Well, since we already have a virtual machine running, we can just use SSH directly.
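Specifically, vagrant ssh-config prints the exact SSH settings Vagrant uses, which can be appended to the SSH client configuration under a host alias (devbox below is an arbitrary name of my choosing):

```shell
# Append the machine's SSH settings (user, port, identity file)
# to the SSH client config under an arbitrary host alias:
$ vagrant ssh-config --host devbox >> ~/.ssh/config

# Plain ssh now works, interactive applications included:
$ ssh devbox top
```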

LiveReload with BrowserSync and Gulp

In my previous blog post, I wrote about how to use the LiveReload Chrome extension with Guard and some Ruby gems to make a web page automatically reload in a browser (Chrome, to be specific). Just reading that sentence, it already sounds like a complicated task. And indeed it is. Luckily, I have found a better solution: BrowserSync.

With LiveReload, you have to install a browser extension, but BrowserSync uses Socket.io, so it can support more than one browser at once. This is great for working with responsive design, where screens of different sizes need to be tested.

No extension to install and support for more than one browser are really big pluses.

The following is a short instruction with some examples on how to use BrowserSync and Gulp to automatically reload, in any connected browser, a web documentation page generated by Docco.
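Assuming Node.js is already available, the tools involved can be pulled in from npm (package names as published on npm):

```shell
# Local development dependencies: the task runner, the sync
# server, and the documentation generator itself:
$ npm install --save-dev gulp browser-sync docco
```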

Create a jq Docker Image with Automated Build

I have created a jq Docker image based on BusyBox with automated builds. BusyBox is really small, so the jq image I have created is also very small, just a little over 6 MB.

Here are the source code and image repositories:

  • https://github.com/realguess/docker-jq
  • https://registry.hub.docker.com/u/realguess/jq/

Building a Docker image with automated builds is pretty straightforward, just follow the instructions:

https://docs.docker.com/userguide/dockerrepos/#automated-builds

I have also created a tag v1.4 in my GitHub repository to match the release of the jq binary. This should also be reflected in the Docker registry. After a couple of tries, here are the build details for adding both latest and 1.4 tags:

Type     Name     Dockerfile Location   Tag Name
------------------------------------------------
Tag      v1.4     /                     1.4
Branch   master   /                     latest

The first two columns match the Git branch and tag names, and the last column reflects the Docker tags. See https://github.com/realguess/docker-jq/tags and https://registry.hub.docker.com/u/realguess/jq/tags/manage/.

This is done by starting an automated build with a type of Tag instead of Branch. Everything is done via the Docker Hub website.

If I pull down this repository:

$ docker pull realguess/jq

It should give me two image layers with two different image IDs, which makes sense, as the latest commit is usually not the same as the tagged one.
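Once pulled, the image can be used as a drop-in jq by piping JSON through a container (a sketch, assuming the image runs jq as a plain command rather than via an entrypoint):

```shell
# -i keeps stdin open for the pipe, --rm removes the container
# when the command exits:
$ echo '{"name": "jq"}' | docker run -i --rm realguess/jq jq '.name'
```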

After a while, the index should be built, and I can search for it via:

$ docker search jq
NAME           DESCRIPTION   STARS   OFFICIAL   AUTOMATED
realguess/jq                 1                  [OK]

The image is listed as AUTOMATED, but the description is missing from the search results.

There are two types of descriptions:

  • Short description
  • Full description

The full description is generated automatically from the README.md file. I thought the short description could also be generated, from README-short.txt, but this is not the case. You can add it on the settings page, for example:

https://registry.hub.docker.com/u/realguess/jq/settings/

Automated builds are triggered with GitHub and Bitbucket repositories. Once any commit is pushed to either repository, a trigger is sent to Docker Hub, and an automated build starts.

Non-Interactive Redis Install

To build Redis from source, we first need to install TCL 8.5 or newer; this is needed for make test later:

$ sudo apt-get install -y tcl

Now clone and make:

$ git clone https://github.com/antirez/redis
$ cd redis
$ git checkout 2.8.13
$ make
$ make test
$ sudo make install
$ sudo utils/install_server.sh

Binaries (redis-cli and redis-server) will be installed into /usr/local/bin.

The last command, utils/install_server.sh, is an interactive one. Since the script is a shell script using the read built-in command with the -p option for prompts, we can make it non-interactive by redirecting its input from the echo command:

$ echo -n | sudo utils/install_server.sh

Without piping any values into the script, the default values are used.

If we really want to customize it, we can supply our own values:

$ echo -e \
  "${PORT}\n${CONFIG_FILE}\n${LOG_FILE}\n${DATA_DIR}\n${EXECUTABLE}\n" | \
  sudo utils/install_server.sh

There are 6 read statements in the script, hence n - 1 = 5 newline characters in the string. Without using -n, the last newline character is supplied by echo.
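The newline accounting can be sanity-checked by counting the lines that echo produces; five embedded newline characters plus the trailing one from echo yield six input lines, five answers and a final ENTER for the confirmation prompt (the values below are hypothetical examples):

```shell
$ echo -e "6380\n/etc/redis/6380.conf\n/var/log/redis_6380.log\n/var/lib/redis/6380\n/usr/local/bin/redis-server\n" | wc -l
6
```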

Here are the default values:

PORT=6379
CONFIG_FILE=/etc/redis/6379.conf
LOG_FILE=/var/log/redis_6379.log
DATA_DIR=/var/lib/redis/6379
EXECUTABLE=/usr/local/bin/redis-server

The utils/install_server.sh script should return something like this:

Welcome to the redis service installer
This script will help you easily set up a running redis server
Selecting default: 6379
Selected default - /etc/redis/6379.conf
Selected default - /var/log/redis_6379.log
Selected default - /var/lib/redis/6379
Selected config:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
System start/stop links for /etc/init.d/redis_6379 already exist.
Success!
/var/run/redis_6379.pid exists, process is already running or crashed
Installation successful!

Create a shorthand for the Redis client:

$ cd /usr/local/bin && sudo ln -s redis-cli redis
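Alternatively, a real shell alias does the same job without touching /usr/local/bin:

```shell
# Define the alias for the current session:
$ alias redis='redis-cli'

# And persist it for future sessions:
$ echo "alias redis='redis-cli'" >> ~/.bashrc
```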

For a more advanced install, see the README file.

Install SSH Public Key to All AWS EC2 Instances

I’ve got a new laptop, and I need to install the SSH public key of the new machine on all my AWS EC2 instances to enable keyless access. I can use ssh-copy-id to install the public key one instance at a time, but I can also do it all at once:

$ aws ec2 describe-instances --output text \
  --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' \
  | while read host; do \
  ssh-copy-id -i /path/to/key.pub $USER@$host; done

Somehow, when using PublicIpAddress directly, some IP addresses in the response were cluttered onto a single line. So, I use {IP:PublicIpAddress} instead.

For non-standard port:

$ ssh-copy-id -i /path/to/key.pub "$USER@$host -p $port"

The only problem is that it might install a duplicate key in the ~/.ssh/authorized_keys file of a remote instance if the key has already been installed. One way to solve this problem is to test the login from the new machine and collect only the IP addresses that the new machine does not have access to:

$ aws ec2 describe-instances --output text \
  --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' \
  | while read host; do ssh -q $USER@$host exit \
  || echo $host; done > instances.txt
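One caveat: if a host prompts for a password instead of failing outright, the loop blocks. The standard OpenSSH options BatchMode and ConnectTimeout make ssh fail fast so such hosts are simply recorded:

```shell
# BatchMode=yes: never prompt for a password, fail instead;
# ConnectTimeout=5: give up on unreachable hosts after 5 seconds.
$ ssh -q -o BatchMode=yes -o ConnectTimeout=5 $USER@$host exit \
  || echo $host
```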

Now, back on the old machine, install the public key to all of them at once:

$ cat instances.txt | while read host; \
  do ssh-copy-id -i /path/to/key.pub $USER@$host; done

Amazon AWS Command Line Interface (CLI)

This is a brief guide for installing AWS Command Line Interface (CLI) on Ubuntu Linux.

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. [2]

The point here is unified: one tool to run all Amazon AWS services.

Install

The installation procedure applies to Ubuntu Linux with Zsh and Bash.

Install pip, a Python package manager:

$ sudo apt-get install python-pip

Install awscli:

$ sudo pip install awscli

Install autocompletion:

$ which aws_zsh_completer.sh
/usr/local/bin/aws_zsh_completer.sh
$ source aws_zsh_completer.sh

Add this line to ~/.zshrc as well.

Or for Bash ~/.bashrc:

$ echo -e '\n# Enable AWS CLI autocompletion' >> ~/.bashrc
$ echo 'complete -C aws_completer aws' >> ~/.bashrc
$ source ~/.bashrc

Test installation:

$ which aws
/usr/local/bin/aws
$ aws help

Test autocompletion:

$ aws <tab><tab>

You should see a list of all available AWS commands.

Usage

Before using aws-cli, you need to tell it about your AWS credentials. There are three ways to specify AWS credentials:

  1. Environment variables
  2. Config file
  3. IAM Role

Using a config file is preferred; it is a simple ini-format file stored in ~/.aws/config. A soft link can be used to link to it, or just tell awscli where to find it:

$ export AWS_CONFIG_FILE=/path/to/config_file
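For reference, a minimal config file could look like this (the access keys below are the well-known example values from the AWS documentation, not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-east-1
output = json
```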

It is better to use IAM roles with any of the AWS services:

The final option for credentials is highly recommended if you are using aws-cli on an EC2 instance. IAM Roles are a great way to have credentials installed automatically on your instance. If you are using IAM Roles, aws-cli will find them and use them automatically. [4]

The default output is in JSON format. Other available formats are tab-delimited text and an ASCII-formatted table. For example, using a --query filter and table output:

$ aws ec2 describe-instances --output table --query \
  'Reservations[*].Instances[*].{ID:InstanceId, TYPE:InstanceType,
  ZONE:Placement.AvailabilityZone, SECURITY:SecurityGroups[0].GroupId,
  KEY:KeyName, VPC:VpcId, STATE:State.Name}'

This will print a nice looking table of all EC2 instances.

The command line options also accept JSON format. But when passing in large blocks of data, referring to a JSON file is much easier. Both local files and remote URLs can be used.

Upgrade

Check the installed and the latest versions:

$ pip search awscli

Upgrade AWS CLI to the latest version:

$ sudo pip install --upgrade awscli

References

  1. AWS Command Line Interface
  2. User Guide
  3. Reference
  4. GitHub repository

Live Browser Reload and Command Execution on File Change

Execute Command

When I am editing comments in my code, I would like to use Docco to generate pretty-printed source code documentation and review it in a web browser. However, every time I made a change, I had to issue the docco command again; even though I could use the up arrow key, it was still a pain. Luckily, there is a way to eliminate this step. One way is to use grunt-contrib-watch, but its limitation is that it is not meant for an individual file on the command line; it is more for a build process. A better alternative is to use nodemon:

For use during development of a node.js based application. nodemon will watch the files in the directory that nodemon was started, and if they change, it will automatically restart your node application.

It does not have to be limited to Node and JavaScript files. We can use it with any command:

nodemon -x docco /path/to/app.coffee

With this command, docco will be executed upon any change to the file.
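By default nodemon only watches a few script extensions, so when the target is a CoffeeScript file it may be necessary to add the extension explicitly (-e and -x are documented nodemon flags):

```shell
# Watch .coffee files too, and run docco on every change:
$ nodemon -e coffee -x 'docco /path/to/app.coffee'
```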

Reload Browser

Making live reload work in a browser is a little bit trickier; it involves using some Ruby gems.

TODO: Need to find a way to avoid using Ruby and its gems.

I have followed some of the steps from the post: Auto-refresh your browser when saving files or restarting node.js.

First install the LiveReload Chrome extension, and then install the Guard and Guard::LiveReload gems:

sudo gem install guard guard-livereload

You need a web server to make live reload work; it does not work on files served directly from the file system:

file:///home/chao/docs/app.html

Therefore, an easy way is to set up a web server, such as Nginx, and configure a directory for this usage, such as:

/usr/share/nginx/www/livereload

Add the following Guardfile into the directory:

guard 'livereload' do
  watch(%r{.+\.(css|js|ejs|html)})
end

Launch Guard:

cd /usr/share/nginx/www/livereload && guard

Enable live reload by clicking the menu icon. You should see the dot in the middle become solid.

Now you need to make sure the generated files go into the correct directory:

nodemon -x 'docco -o /usr/share/nginx/www/livereload' /path/to/app.coffee

Another way is to create a soft link to the directory, since docs in the current working directory is the default output directory:

ln -s /usr/share/nginx/www/livereload docs

Then, you can use nodemon as you normally do:

nodemon -x docco /path/to/app.coffee

Having live browser reload is a bit complicated. I wish there were a method as easy as using nodemon via a single command. But in the meantime, I just keep two panes open with both monitoring tools running independently. No more browser refreshing and command re-issuing.