Configuration Setup and Loading Precedence

Loading precedence (later steps override earlier ones):

  1. Load application default settings (config/default.yml)
  2. Load environment settings
  3. Load custom settings

The application's default settings are provided upstream and therefore should not be modified.

There are four environments: development, testing, staging, and production. The environment is selected with the shell environment variable ENV; if it is not specified, it defaults to development. Environment settings override default settings.

Custom settings override environment settings. The search path for custom settings is:

  1. Environment variable SETTINGS
  2. config/settings.yml

If neither path is available, no custom settings are applied.

config/settings.yml holds custom settings, so it is ignored by git via the .gitignore file:

config/settings.yml

The complete directory structure for the configuration setup:

config/
├── default.yml
├── environments
│   ├── development.yml
│   ├── production.yml
│   ├── staging.yml
│   └── testing.yml
└── settings.yml

Quick start command:

$ node app

Fully customized start command:

$ ENV=production SETTINGS=/path/to/settings.yml node app
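
For reference, here is a minimal sketch of a loader that could implement this precedence. It is only an illustration, not the actual implementation; the js-yaml module and the simple top-level merge are assumptions:

// config.js - a minimal sketch of the loading precedence (not the real loader).
var fs = require('fs');
var path = require('path');
var yaml = require('js-yaml'); // assumed YAML parser

function load(file) {
  if (!file || !fs.existsSync(file)) return {};
  return yaml.safeLoad(fs.readFileSync(file, 'utf8')) || {};
}

var env = process.env.ENV || 'development';
var layers = [
  path.join('config', 'default.yml'),                          // 1. defaults
  path.join('config', 'environments', env + '.yml'),           // 2. environment
  process.env.SETTINGS || path.join('config', 'settings.yml')  // 3. custom
];

var settings = {};
layers.forEach(function (file) {
  var layer = load(file);
  Object.keys(layer).forEach(function (key) {
    settings[key] = layer[key]; // later layers override earlier ones
  });
});

module.exports = settings;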

Written in CoffeeScript, Required in JavaScript

If you are writing in CoffeeScript, then requiring another module written in CoffeeScript works the same as if both scripts were in JavaScript:

cm = require 'coffee-module'

But if you are writing in JavaScript, and the dependent module is in CoffeeScript, you have to include CoffeeScript as a dependency and require it before requiring the module:

require('coffee-script');
var cm = require('coffee-module');

For better compatibility, source code written in CoffeeScript should be compiled into JavaScript. But what is the best practice? You don't want to maintain two languages. When should you compile? Here are two options:

  1. Compile before publishing the module
  2. Compile after the module is installed

The advantage of the first approach is that there is no dependency on CoffeeScript: the module has already been compiled into JavaScript before being submitted to the module registry. However, this approach requires two repositories, one for the source code and another for the published module. If you are working on a private project, it is unlikely that you will publish your module to a public NPM registry or run your own private one; it is more likely that you will have a single source code repository. Therefore, the second approach might be better in this situation. However, coffee-script must then be added as a dependency, or it must be installed globally with the coffee command available during the preinstall phase. Although this approach is not recommended in npm-scripts, before setting up a private NPM registry, this is the way to go.

Here are the required fields in package.json:

{
  "scripts": {
    "preinstall": "coffee --compile --bare --output lib/ src/"
  }
}
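
If the compiled entry point ends up under lib/, the main field would typically point there as well. The file name lib/index.js below is only an assumption about your entry file:

{
  "main": "lib/index.js",
  "scripts": {
    "preinstall": "coffee --compile --bare --output lib/ src/"
  }
}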

Poor Man's VPN

sshuttle is a very simple tool for creating a VPN-like connection to any remote server that you have SSH access to. You do not need to set up a complex VPN on the remote machine.

Requirements:

  • Root access on local machine
  • SSH access to a remote machine

Install:

$ sudo apt-get install sshuttle

Usage:

$ sshuttle --dns -vvr username@sshserver 0/0

0/0 is the shortcut for 0.0.0.0/0, i.e., forward all traffic. --dns enables DNS queries to be proxied as well. Note that you do not need to run the command with sudo, but you will be prompted for your password, since root access is needed on the local machine.
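
If you only want to tunnel traffic destined for a particular remote network rather than everything, you can pass that subnet instead (192.168.2.0/24 here is just a made-up example):

$ sshuttle -r username@sshserver 192.168.2.0/24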

Treading On Startup Scene in Hong Kong

On the morning of November 22, 2013, the sun had yet to rise behind all the tall buildings. I was up and preparing to leave Hong Kong. The streets were quiet, and the taxi driver was still stretching his arms while waiting for us outside. I had been in Hong Kong for a month, working remotely. As a startup founder, I always have a drive to explore the local startup culture and scene.

The Hong Kong government is actively promoting the region and attracting talent. It has built the Hong Kong Science & Technology Parks to concentrate researchers, entrepreneurs, and students in a Silicon Valley-like location. Programs like Incu-App most closely resemble a startup incubator/accelerator.

Hong Kong Science & Technology Parks

This is a community at a small scale; you need the support of the bigger community. So, I went to watch a pitch event at CoCoon (a co-working space). There were 5 companies, 3 of them ecommerce companies selling products originating in Hong Kong or nearby to a Western audience. The quality of the pitches (presentation and slides) was quite poor.

CoCoon Pitch Event
CoCoon Shared Board
CoCoon Shared Desks

Setting the pitches aside for now, I chatted with a few founders after the event. One of the issues people face here is the lack of funding. Funding is difficult here; it might be better than before, but it is still far behind the US. This actually pushes entrepreneurs here to bootstrap and get to profit earlier. Ecommerce is obviously an easy way to go. But it does not mean that you cannot dream bigger. It might just take longer to get to where you want to go. Sometimes that is not a bad thing. Good stuff takes a longer and more troubled journey.

I think this comes down to the mindset of people here. The mindset of entrepreneurs here is much narrower than that of entrepreneurs from Silicon Valley, New York, or other places. Finance and entertainment are still the dominant sectors. With the growing influence of mainland Chinese companies and the integration of transportation, it is going to get harder and harder for Hong Kong to become a tech hub.

Lamma Island Seafood

Let's end on a positive note. Food and tech go hand in hand. Just by walking downstairs, I was at a shopping mall. Inside the shopping mall, there are restaurants with a variety of food. Don't like those? Take an indoor sky bridge or an underground tunnel across the street to the adjacent mall, and there are more restaurants and more food. You can get everything you need without going out to the streets. But if you do, every street corner is a new exploration. And for authentic Chinese food, there is nothing that beats it.

Hong Kong has its special place, but in tech, it needs a lot of work.

Ping

There are many network utilities that are ready to use out of the box on Linux. ping is one of them:

DESCRIPTION
ping uses the ICMP protocol's mandatory ECHO_REQUEST datagram to elicit
an ICMP ECHO_RESPONSE from a host or gateway. ECHO_REQUEST datagrams
(``pings'') have an IP and ICMP header, followed by a struct timeval
and then an arbitrary number of ``pad'' bytes used to fill out the
packet.
ping6 can also send Node Information Queries (RFC4620).

pings are ECHO_REQUEST datagrams.

Quick usage:

% ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.034 ms
64 bytes from localhost (127.0.0.1): icmp_req=2 ttl=64 time=0.045 ms
64 bytes from localhost (127.0.0.1): icmp_req=3 ttl=64 time=0.039 ms
^C
--- localhost ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.034/0.039/0.045/0.006 ms

rtt is the round-trip time in milliseconds. The statistics include the minimum, average, maximum, and mean deviation times.

The manual page explains many options. Here are the ones I found helpful:

ping -D -n -q -c 100 localhost
  • -D: print timestamps (UNIX time + microseconds)
  • -n: do not look up symbolic names for host addresses (faster)
  • -q: quiet output; only print the summary
  • -c: number of pings to send

Ping localhost to verify that the local network interface is up and running:

% ping -D -n -q -c 100 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.

--- localhost ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99005ms
rtt min/avg/max/mdev = 0.029/0.047/0.059/0.007 ms

Ping the local gateway, usually the router:

% ping -D -n -q -c 100 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.

--- 192.168.1.1 ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 98997ms
rtt min/avg/max/mdev = 0.416/0.502/1.448/0.100 ms

If we ping a remote server without the -n option, it will be slower due to the reverse lookup of host addresses:

% ping -c 20 baidu.com
20 packets transmitted, 20 received, 0% packet loss, time 96034ms
rtt min/avg/max/mdev = 46.061/49.096/92.912/10.061 ms

Without lookup:

% ping -c 20 -n baidu.com
20 packets transmitted, 20 received, 0% packet loss, time 19029ms
rtt min/avg/max/mdev = 45.802/46.428/47.391/0.458 ms

Look up the actual IP address of the domain:

% nslookup baidu.com
Server:         127.0.0.1
Address:        127.0.0.1#53

Non-authoritative answer:
Name:   baidu.com
Address: 220.181.111.86
Name:   baidu.com
Address: 220.181.111.85
Name:   baidu.com
Address: 123.125.114.144   

We can also ping the IP address directly, which is almost the same as using the -n option, and is faster:

% ping -c 20 220.181.111.86
--- 220.181.111.86 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19026ms
rtt min/avg/max/mdev = 46.103/53.019/86.077/12.831 ms

But not all servers respond to ping; pings might be dropped altogether. Also, even if a server does respond to ping, it does not mean that you can browse the website hosted on that server. The port might be blocked, or the web server might be down.
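
To check whether the web site itself is reachable, an HTTP request is more telling than a ping; for example (assuming the site serves plain HTTP), fetch only the response headers with curl:

% curl -I http://baidu.com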

Pinging an IP address is fun, but what is more exciting is finding out more information about the address, such as the geographical location of the IP. What Is My IP Address is a good tool for this.

Next: ping6.

Why Shebang?

Frequently, the first two characters on the first line of a script are #!.

Why the shebang? The reason is simple: the system needs to know which interpreter to use when executing the script.

The sha-bang (#!) at the head of a script tells your system that this file is a set of commands to be fed to the command interpreter indicated. The #! is actually a two-byte magic number, a special marker that designates a file type, or in this case an executable shell script (type man magic for more details on this fascinating topic). Immediately following the sha-bang is a path name. This is the path to the program that interprets the commands in the script, whether it be a shell, a programming language, or a utility. This command interpreter then executes the commands in the script, starting at the top (the line following the sha-bang line), and ignoring comments. - Starting Off With a Sha-Bang

The shebang line is usually ignored by the interpreter, because the # character is a comment marker in many scripting languages. It works even when the script itself uses a different comment character, such as in JavaScript:

#!/usr/bin/env node
// JavaScript stuff starts here.

This works because the first line only tells the system which interpreter to launch; the rest is handled by the JavaScript interpreter, in this case Node.

The syntax of shebang:

#! interpreter [optional-arg]

Whether or not there is a space between the shebang characters and the interpreter path does not matter.

The interpreter must usually be an absolute path to a program that is not itself a script. The following are example interpreters:

#! /bin/sh
#! /bin/bash

More examples can be found via head -n 1 /etc/init.d/*.

The optional‑arg should either not be included or it should be a string that is meant to be a single argument (for reasons of portability, it should not contain any whitespace). - Shebang_(Unix)

We also frequently see the following shebang:

#!/usr/bin/env bash

That has to do with portability. Not every system installs the node or bash command in the same path. /usr/bin/env will search through the user's $PATH and locate the executable correctly.
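
For example, given a hypothetical app.js whose first line is #!/usr/bin/env node, the script can be made executable and run directly, and the loader will then find node through $PATH:

$ chmod +x app.js
$ ./app.js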

To conclude for Node, here is the format I am using:

#! /usr/bin/env node
// Title
// =====
//
// Markdown style description starts here.
'use strict';

Get Yesterday's Date by date Command

Using the Linux date command, we can get today's date in the following format:

$ date +%Y-%m-%d
2013-11-14

To get yesterday’s date, we can use the --date or -d option:

% date -d 'yesterday' +%Y-%m-%d
2013-11-13
% date -d '-1 day' +%Y-%m-%d
2013-11-13
% date -d '1 day ago' +%Y-%m-%d
2013-11-13

As the manual explains:

DATE STRING
The --date=STRING is a mostly free format human readable date string
such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29 16:21:42" or
even "next Thursday". A date string may contain items indicating
calendar date, time of day, time zone, day of week, relative time,
relative date, and numbers. An empty string indicates the beginning of
the day. The date string format is more complex than is easily
documented here but is fully described in the info documentation.

Just be careful when you are trying to get the date from last month: on the 29th, 30th, or 31st, subtracting a month can produce a day that does not exist in the previous month, and the result gets normalized forward into the current month.
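
One common workaround is to anchor the calculation on a day that exists in every month, for example the 15th:

% date -d "$(date +%Y-%m-15) -1 month" +%Y-%m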

Live Browser Reload and Command Execution on File Change

Execute Command

When I am editing comments in my code, I like to use Docco to generate pretty-printed source code documentation and review it in the web browser. However, every time I made a change, I had to issue the docco command again; even with the up arrow key, it was still a pain. Luckily, there is a way to eliminate this step. One option is to use grunt-contrib-watch, but its limitation is that it is not meant for an individual file on the command line; it is more for a build process. A better alternative is to use nodemon:

For use during development of a node.js based application. nodemon will watch the files in the directory that nodemon was started, and if they change, it will automatically restart your node application.

It does not have to be limited to Node and JavaScript files. We can use it with any command:

nodemon -x docco /path/to/app.coffee

With this command, docco will be executed upon any change to the file.

Reload Browser

Making live reload work in the browser is a little trickier; it involves some Ruby gems.

TODO: Need to find a way to avoid using Ruby and its gems.

I have followed some of the steps from the post: Auto-refresh your browser when saving files or restarting node.js.

First install the LiveReload Chrome extension, and then install the Guard and Guard::LiveReload gems:

sudo gem install guard guard-livereload

You need a web server to make live reload work; it does not work on files served directly from the file system:

file:///home/chao/docs/app.html

Therefore, an easy way is to set up a web server, such as Nginx, and configure a directory for this usage, such as:

/usr/share/nginx/www/livereload
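
A minimal Nginx server block for this might look like the sketch below; the port and paths are assumptions, adjust them to your own setup:

server {
    listen 8080;
    server_name localhost;
    root /usr/share/nginx/www/livereload;
    index index.html;
}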

Add the following Guardfile into the directory:

guard 'livereload' do
  watch(%r{.+\.(css|js|ejs|html)})
end

Launch Guard:

cd /usr/share/nginx/www/livereload && guard

Enable live reload by clicking the extension's menu icon. You should see the dot in the middle become solid.

Now you need to make sure the generated files go into the correct directory:

nodemon -x 'docco -o /usr/share/nginx/www/livereload' /path/to/app.coffee

Another way is to create a symbolic link to the directory, since docs in the current working directory is the default output directory:

ln -s /usr/share/nginx/www/livereload docs

Then, you can use nodemon as you normally do:

nodemon -x docco /path/to/app.coffee

Getting live browser reload working is a bit complicated. I wish there were a method as easy as nodemon, via a single command. But in the meantime, I just need to have two panes open with both monitoring tools running independently. No more browser refreshing and command re-issuing.

Keep It Simple and Small (KISS)

We are all aware of the KISS principle: Keep It Simple, Stupid. Let me introduce a similar principle: Keep It Simple and Small.

Maciej Cegłowski, who created Pinboard, deliberately wants to avoid rapid growth.

“I have seen a lot of free services burn up all their development time scaling for users,” he said. He has no plan to cap the gate fee as more people sign on. “I am tempted to just let it go up and see what happens,” he says. - http://news.cnet.com/8301-19882_3-10310347-250.html

Staying small gives you time to try out different features without upsetting much of your user base.

What if a little site you love doesn’t have a business model? Yell at the developers! Explain that you are tired of good projects folding and are willing to pay cash American dollar to prevent that from happening. It doesn’t take prohibitive per-user revenue to put a project in the black. It just requires a number greater than zero. - Don’t Be A Free User

Why does there have to be an exit strategy? If you love what you are doing and want to do it for the rest of your life, why do you need an exit strategy? For some people, it might be better to strike out alone, keeping it small and having a business model.

Make a living without being beholden to a boss or investors. - Maciej Cegłowski

Preserve command history in Node REPL

The Node REPL (Read-Eval-Print Loop) is an interactive JavaScript interpreter for Node. To invoke it, simply type:

$ node

However, after exiting and re-entering the REPL, all the previous commands are lost. This is very different from what we expect from a shell.

To maintain a persistent history, we can use rlwrap as suggested in this answer.

Install rlwrap:

$ sudo apt-get install rlwrap

Create an alias:

alias node='env NODE_NO_READLINE=1 rlwrap -s 1000 -S "node> " node'

The persistent history file is saved in:

~/.node_history

However, by preserving the command history this way, the Node REPL loses its familiar color output and tab completion. This is a trade-off. To get around the problem, we can roll our own REPL or use an alternative. The best alternative I have found so far is also the one I am already using: CoffeeScript (see the 1.6.3 change log). It has a history command built in:

$ coffee
coffee> .help
.break  Sometimes you get stuck, this gets you out
.clear  Break, and also clear the local context
.exit   Exit the repl
.help   Show repl options
.history        Show command history
.load   Load JS from a file into the REPL session
.save   Save all evaluated commands in this REPL session to a file
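
If you would rather stay with the plain Node REPL, rolling your own is also possible. Here is a rough sketch that persists history to ~/.node_history; it assumes the REPL server exposes its readline interface as rli with a history array, which is the case for the Node versions around this time:

// repl-history.js - a rough sketch, not a polished tool.
var fs = require('fs');
var path = require('path');
var repl = require('repl');

var historyFile = path.join(process.env.HOME, '.node_history');
var server = repl.start({ prompt: 'node> ' });

// Preload any existing history (readline keeps the most recent entry first).
try {
  fs.readFileSync(historyFile, 'utf8')
    .split('\n')
    .filter(Boolean)
    .forEach(function (line) { server.rli.history.push(line); });
} catch (err) {
  // No history file yet; start fresh.
}

// Write the history back out when the REPL exits.
server.rli.on('close', function () {
  fs.writeFileSync(historyFile, server.rli.history.join('\n'));
});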