Working with Big Numbers in JavaScript

JavaScript numbers are 64-bit floats, which can only represent integers exactly up to 53 bits. If you are working with a big number such as a Twitter ID, which is a 64-bit integer, you need an external library; otherwise, there will be precision loss:

> num = 420938523475451904
420938523475451900
> num = 420938523475451904 + 1
420938523475451900
> num = 420938523475451904 - 1
420938523475451900
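
The cutoff is 2^53: beyond that, adjacent integers are no longer distinguishable, which is easy to verify in the REPL:

> Math.pow(2, 53)
9007199254740992
> Math.pow(2, 53) === Math.pow(2, 53) + 1
true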

Here is one library to use in a Node environment. Install Big.js:

$ npm install big.js

Load the module:

> BigNum = require('big.js')
{ [Function: Big] DP: 20, RM: 1 }

Use a string to create the big number:

> num = BigNum('420938523475451904')
{ s: 1,
  e: 17,
  c: [ 4, 2, 0, 9, 3, 8, 5, 2, 3, 4, 7, 5, 4, 5, 1, 9, 0, 4 ] }
> num.toString()
'420938523475451904'
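
The string is important. If you pass a number literal instead, precision is lost before Big.js ever sees the value, because the literal itself is parsed as a 64-bit float:

> BigNum(420938523475451904).toString()
'420938523475451900'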

Perform addition:

> num.plus(1).toString()
'420938523475451905'

Perform subtraction:

> num.minus(1).toString()
'420938523475451903'
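
Multiplication and division work the same way through times and div:

> num.times(2).toString()
'841877046950903808'
> num.div(2).toString()
'210469261737725952'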

There are other packages that have yet to be tested.

Passage of Time 2013

  • 28 blog posts (next: from weekly to one every 3 days)
  • 89 WeChat moments

Snapshot

  • 29 blog posts
  • 204 Twitter tweets
  • 345 Twitter following
  • 76 Twitter followers
  • 8 GitHub followers
  • 334 GitHub starred
  • 27 GitHub following
  • 10 GitHub source repositories
  • 5 GitHub forked repositories
  • 13 Hacker News karma

Obtain Total Number of Notes in Google Keep

Google Keep is a quick note-taking service with far fewer features than alternatives like Evernote. You cannot even get the total number of notes. But you can quickly fire up the browser console and type in the following script to get the note count:

document.getElementsByClassName('IZ65Hb-n0tgWb').length

No jQuery needed, and problem solved.

Split JSON File into Multiple Parts

mongoexport allows us to export documents from MongoDB into a JSON file:

$ mongoexport -d mydb -c mycollection -o myfile.json --jsonArray

The --jsonArray option writes the entire export as a single JSON array. Sometimes an individual file is quite large, and we need to break it into smaller pieces, because most web servers limit how much data can be submitted at once. However, with this option, the entire export is a single line, so line-oriented commands cannot easily be used. (If you are looking to do so, you can use jq to split a large JSON file into smaller pieces.) Alternatively, we can omit the option (the default behavior) and have the export utility dump one document per line. The exported file as a whole is then technically not valid JSON, but each line, representing a MongoDB document, is valid JSON and can be used for command line processing.
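
As a sketch of the jq route (assuming myfile.json was exported with --jsonArray), jq can stream the array back out as one compact document per line, ready for line-oriented tools:

$ jq -c '.[]' myfile.json | split -l 10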

To break a large file into many smaller pieces, we can use the split command:

$ split -l 10 data.json

The -l or --lines option limits each output file to a maximum of 10 lines.

Another way is to use the -C or --line-bytes option to put at most 1 kB of complete lines into each output file:

$ split -d -a 3 -C 1k data.json

One thing to make sure of is that no single line exceeds the maximum size specified by the option; otherwise, partial lines will be generated.

It is good to keep all those parts in their own directory:

$ mkdir pieces && cd pieces && split -d -a 3 -C 1k ../data.json && cd ..

Unless the file is broken into one line per piece, we need to convert each individual piece into valid JSON:

$ find pieces/* -exec sh -c \
> "awk 'BEGIN{l=\"[\"}{print l;l=\$0\",\"}END{print\$0\"\n]\"}' \
> {} > {}.json && rm {}" \;

The output JSON file will contain an array of MongoDB documents. The main idea of the AWK script is to print out the previous line, with a comma appended, as it reads the current line. BEGIN { l = "[" } makes the first printed line the opening square bracket, and END { print $0"\n]" } prints the last line of the file followed by the closing square bracket.

There is bound to be a better way. I just need to keep looking.
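
One candidate, assuming jq is installed: its slurp mode (-s) reads a stream of documents and wraps them in a single array, turning each piece into valid JSON in one step:

$ for piece in pieces/x*; do jq -s '.' "$piece" > "$piece.json" && rm "$piece"; done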

Set Filename Path with process.cwd()

When working with the file system in Node, you will need a fully qualified filename to do things like reading the content of a file:

require('fs').readFile(filename, function (err, data) {});

If you have the following directory structure:

.
├── app.js
├── data
│   └── file.txt
└── lib
    └── reader.js

There are two ways to read the content of data/file.txt from lib/reader.js:

The first is to use __dirname, the directory of the current script:

filename = __dirname + '/../data/file.txt';

or with path.join to normalize the resulting path:

filename = path.join(__dirname, '../data/file.txt');

The second method is to use the current working directory of the process, process.cwd():

filename = process.cwd() + '/data/file.txt';

The second approach is more portable. If you move lib/reader.js to lib/utils/reader.js, no code change is needed. But make sure the Node process starts from the application root directory, that is, the directory where node app.js is issued.
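
Putting it together, a minimal lib/reader.js using the second approach might look like this (a sketch, assuming the directory structure above):

var fs = require('fs');
var path = require('path');

// Resolve relative to the working directory of the Node process,
// which is assumed to be the application root.
var filename = path.join(process.cwd(), 'data/file.txt');

fs.readFile(filename, 'utf8', function (err, data) {
  if (err) throw err;
  console.log(data);
});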

Small Markdown Features on Ghost Blogging Platform

I have known about Ghost since its Kickstarter campaign, and they recently launched the hosted platform for Ghost. I would like to set it up and run it myself, but the hosted platform is 100% customized to run Ghost blogs, so it could be more efficient and reliable than tinkering on my own. While testing it, I noticed two small Markdown features supported by Ghost:

Automatic links

You do not need to enclose a fully qualified URL in angle brackets; a URL like http://ghost.org will be automatically linked. But this only works well with a language like English, where a space serves as a separator. When I work with a language like Chinese, it frequently creates a problem where Chinese characters are treated as part of the link. Therefore, I still prefer to write URLs with angle brackets and have gotten into the habit of doing so:

<http://ghost.org/>

Line divider

One feature I like so far is the support for a line divider: just three dashes on a line will be converted into a fancy divider. This is great for writing references and footnotes.
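
For example, in the Markdown source:

Some closing thoughts.

---

References and footnotes go here.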

This is a quick blog post on getting started with Ghost.


Exploration continues…

HTTP Methods Truth Table

My take on HTTP methods and resources:

+---+--------------------------+---------------+-----+-----+
| # | Request-URI              | Method        | RE  | RNE |
+---+--------------------------+---------------+-----+-----+
| 0 | GET /resources           | list          | 200 | 200 |
| 1 | GET /resources/entity    | load/insert   | 200 | 404 |
|---+--------------------------+---------------+-----+-----|
| 2 | POST /resources          | create        | 201 | 409 |
| 3 | POST /resources/entity   | N/A           | N/A | N/A |
|---+--------------------------+---------------+-----+-----|
| 4 | PUT /resources           | (batch)       | 200 | 200 |
| 5 | PUT /resources/entity    | replace/save  | 204 | 201 |
|---+--------------------------+---------------+-----+-----|
| 6 | PATCH /resources         | (batch)       | 200 | 200 |
| 7 | PATCH /resources/entity  | update        | 204 | 404 |
|---+--------------------------+---------------+-----+-----|
| 8 | DELETE /resources        | (batch)       | 200 | 200 |
| 9 | DELETE /resources/entity | remove/delete | 204 | 404 |
+---+--------------------------+---------------+-----+-----+

Notes:

  1. RE: resource exists
  2. RNE: resource does not exist
  3. For a batch request, whether the resource/entity exists or not, the
    resulting HTTP status code is always 200, because the code indicates the
    status of the operation. The actual status code of each entity is enclosed
    in the response array (see the sketch after these notes). When there are no
    matching entities, the response is an empty array; therefore, status code
    204 is not used.
  4. There are two situations where a new resource is created (status code
    201); in both, the Location header must indicate the fully qualified
    resource URI.
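
To illustrate note 3, a hypothetical batch response to DELETE /resources might look like the following (the field names are illustrative only):

[
  { "id": "1", "status": 204 },
  { "id": "2", "status": 404 }
]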

Insert Text to the Beginning of a File

It is easy to append some text to the end of another file:

$ cat foo >> bar

or even just portion of a file:

$ head -n 2 foo >> bar

but how about prepending to the beginning of a file? Well, it is not that hard to do either:

$ echo "$(cat foo bar)" > bar

or just a portion of a file:

$ echo "$(head -n 2 foo; cat bar)" > bar

Now you can easily add some text, such as copyright information, to the beginning of another file with a single command.
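
For instance, assuming the copyright text lives in a hypothetical copyright.txt, prepending it to app.js is a one-liner:

$ echo "$(cat copyright.txt app.js)" > app.js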

Amazon Route 53 via Command Line

Retrieve a list of hosted zones:

$ aws route53 list-hosted-zones
{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 4,
            "CallerReference": "12345678-ABCD-EFGH-IJKL-ABCDEFGHIJKL",
            "Config": {},
            "Id": "/hostedzone/1234567890ABC",
            "Name": "realguess.net."
        }
    ],
    "IsTruncated": false,
    "MaxItems": "100"
}

Get a single hosted zone with delegation set (four Route 53 name servers that were assigned to the hosted zone):

$ aws route53 get-hosted-zone --id 1234567890ABC
{
    "HostedZone": {
        "ResourceRecordSetCount": 4,
        "CallerReference": "12345678-ABCD-EFGH-IJKL-ABCDEFGHIJKL",
        "Config": {},
        "Id": "/hostedzone/1234567890ABC",
        "Name": "realguess.net."
    },
    "DelegationSet": {
        "NameServers": [
            "ns-1727.awsdns-23.co.uk",
            "ns-1312.awsdns-36.org",
            "ns-402.awsdns-50.com",
            "ns-587.awsdns-09.net"
        ]
    }
}
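
If only the name servers are needed, the response can be filtered with jq (assuming jq is installed):

$ aws route53 get-hosted-zone --id 1234567890ABC | jq -r '.DelegationSet.NameServers[]'
ns-1727.awsdns-23.co.uk
ns-1312.awsdns-36.org
ns-402.awsdns-50.com
ns-587.awsdns-09.net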

List all resource record sets in a hosted zone:

$ aws route53 list-resource-record-sets --hosted-zone-id 1234567890ABC
{
    "IsTruncated": false,
    "ResourceRecordSets": [
        {
            "ResourceRecords": [
                {
                    "Value": "192.168.153.123"
                }
            ],
            "Type": "A",
            "Name": "realguess.net.",
            "TTL": 172800
        },
        {
            "ResourceRecords": [
                {
                    "Value": "ns-1727.awsdns-23.co.uk."
                },
                {
                    "Value": "ns-1312.awsdns-36.org."
                },
                {
                    "Value": "ns-402.awsdns-50.com."
                },
                {
                    "Value": "ns-587.awsdns-09.net."
                }
            ],
            "Type": "NS",
            "Name": "realguess.net.",
            "TTL": 172800
        },
        {
            "ResourceRecords": [
                {
                    "Value": "ns-1727.awsdns-23.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400"
                }
            ],
            "Type": "SOA",
            "Name": "realguess.net.",
            "TTL": 900
        },
        {
            "ResourceRecords": [
                {
                    "Value": "192.168.153.123"
                }
            ],
            "Type": "A",
            "Name": "www.realguess.net.",
            "TTL": 86400
        }
    ],
    "MaxItems": "100"
}

Retrieve a single record set:

$ aws route53 list-resource-record-sets --hosted-zone-id 1234567890ABC \
--start-record-name www.realguess.net --start-record-type A --max-items 1
{
    "IsTruncated": false,
    "ResourceRecordSets": [
        {
            "ResourceRecords": [
                {
                    "Value": "192.168.153.123"
                }
            ],
            "Type": "A",
            "Name": "www.realguess.net.",
            "TTL": 86400
        }
    ],
    "MaxItems": "1"
}

In order to create a new record set, first create a JSON file to describe the new record:

{
    "Comment": "A new record set for the zone.",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "api.realguess.net.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {
                        "Value": "www.realguess.net"
                    }
                ]
            }
        }
    ]
}

Add the new record set (note that the path to the JSON file must be given with a file:// prefix):

$ aws route53 change-resource-record-sets --hosted-zone-id 1234567890ABC \
--change-batch file:///path/to/record.json
{
    "ChangeInfo": {
        "Status": "PENDING",
        "Comment": "A new record set for the zone.",
        "SubmittedAt": "2013-12-06T00:00:00.000Z",
        "Id": "/change/CHANGEID123"
    }
}

The status of adding the new record is currently pending. Poll the server to get the updated status:

$ aws route53 get-change --id CHANGEID123
{
    "ChangeInfo": {
        "Status": "INSYNC",
        "Comment": "A new record set for the zone.",
        "SubmittedAt": "2013-12-06T00:00:00.000Z",
        "Id": "/change/CHANGEID123"
    }
}

The new record has been created and propagated to all Route 53 name servers.
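
Instead of re-running the command by hand, a small shell loop can wait for the INSYNC status (a sketch using the same change ID):

$ until aws route53 get-change --id CHANGEID123 | grep -q '"Status": "INSYNC"'; do sleep 10; done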