
Amazon API Gateway No Integration Defined for Method

I encountered the following problem when deploying to Amazon API Gateway with the Serverless Framework (v1.27.3):

CloudFormation - CREATE_IN_PROGRESS - AWS::ApiGateway::Deployment - ApiGatewayDeployment
CloudFormation - CREATE_FAILED - AWS::ApiGateway::Deployment - ApiGatewayDeployment
An error occurred: ApiGatewayDeployment - No integration defined for method

The problem was not a bug on the Serverless side; it originated from a manually created resource with no integration defined for its POST method:

$ aws apigateway get-integration --rest-api-id xxxxxxxxxx --resource-id 0xxxxx --http-method post
An error occurred (NotFoundException) when calling the GetIntegration operation: No integration defined for method

After removing the resource, the deployment worked again.

In conclusion, if a resource exists and one of its methods has no integration, the REST API cannot be deployed. Either removing the resource or creating an integration (sketched below) will resolve the problem.
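
For the second option, a mock integration can be attached to the offending method. A minimal sketch with put-integration, reusing the placeholder IDs from the get-integration call above:

$ aws apigateway put-integration \
    --rest-api-id xxxxxxxxxx \
    --resource-id 0xxxxx \
    --http-method POST \
    --type MOCK \
    --request-templates '{"application/json": "{\"statusCode\": 200}"}'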

The following script scans a REST API for methods with no integration defined:

#!/bin/sh
#
# Search an Amazon API Gateway REST API for resources that have no integration
# defined for a method.
API_NAME=example.com
REST_API_ID=$(aws apigateway get-rest-apis --query='items[?name==`'$API_NAME'`].id | [0]' --output=text)
RESOURCES=$(aws apigateway get-resources --rest-api-id=$REST_API_ID --query='items[*].id' --output=text)
for resource in $RESOURCES
do
    for method in $(aws apigateway get-resource \
        --query='resourceMethods && keys(resourceMethods)' \
        --output=text \
        --rest-api-id=$REST_API_ID \
        --resource-id=$resource)
    do
        if [ "$method" != "None" ]
        then
            aws apigateway get-integration \
                --rest-api-id=$REST_API_ID \
                --resource-id=$resource \
                --http-method=$method > /dev/null
            # get-integration fails with NotFoundException when the method
            # has no integration; print the offending resource.
            if [ $? -ne 0 ]
            then
                aws apigateway get-resource \
                    --rest-api-id=$REST_API_ID \
                    --resource-id=$resource
            fi
        fi
    done
done


Registering a Domain with Amazon Route 53 via CLI

Registering a domain with Amazon Route 53 via the CLI is very straightforward. The only complicated part is generating the acceptable JSON structure, and then entering the JSON string on the command line without interference from the shell.

The command is:

$ aws route53domains register-domain

The documentation can be found in the AWS CLI Command Reference, or by issuing:

$ aws route53domains register-domain help

Most of the command line options require simple string values, but some options like --admin-contact require structured data, which is complex to enter on the command line. The easiest way is to skip all the other options and use --cli-input-json to supply everything as JSON.

First, let's generate the skeleton or template to use:

$ aws route53domains register-domain --generate-cli-skeleton

That should output something like:

{
    "DomainName": "",
    "IdnLangCode": "",
    "DurationInYears": 0,
    "AutoRenew": true,
    "AdminContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "RegistrantContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "TechContact": {
        "FirstName": "",
        "LastName": "",
        "ContactType": "",
        "OrganizationName": "",
        "AddressLine1": "",
        "AddressLine2": "",
        "City": "",
        "State": "",
        "CountryCode": "",
        "ZipCode": "",
        "PhoneNumber": "",
        "Email": "",
        "Fax": "",
        "ExtraParams": [
            {
                "Name": "",
                "Value": ""
            }
        ]
    },
    "PrivacyProtectAdminContact": true,
    "PrivacyProtectRegistrantContact": true,
    "PrivacyProtectTechContact": true
}

Fill in the fields, and leave out the ones not needed:

{
    "DomainName": "example.com",
    "DurationInYears": 1,
    "AutoRenew": true,
    "AdminContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "[email protected]"
    },
    "RegistrantContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "[email protected]"
    },
    "TechContact": {
        "FirstName": "Joe",
        "LastName": "Doe",
        "ContactType": "PERSON",
        "AddressLine1": "101 Main St",
        "City": "New York",
        "State": "NY",
        "CountryCode": "US",
        "ZipCode": "10001",
        "PhoneNumber": "+1.8888888888",
        "Email": "[email protected]"
    },
    "PrivacyProtectAdminContact": true,
    "PrivacyProtectRegistrantContact": true,
    "PrivacyProtectTechContact": true
}

Now let's register the domain by running the JSON through jq to strip extra blanks and end-of-line characters, and then piping it to xargs, using the newline character as the delimiter instead of the default blank:

$ cat input.json | jq -cM . | \
    xargs -d '\n' aws route53domains register-domain --cli-input-json
{
    "OperationId": "00000000-0000-0000-0000-00000000000"
}

Fixing Authorization Failure in AWS CLI by Synchronizing the Clock

I ran into the following error when executing an AWS command:

$ aws ec2 describe-instances
An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS
was not able to validate the provided access credentials

From the error message, it appears to be a problem with the access credentials. But after updating to new credentials, and even updating the AWS package, the error persisted. After trying other commands, one error message contained "signature not yet current" with timestamps. So the actual problem was an inaccurate local clock. Hence, the solution is to sync the local date and time by polling a Network Time Protocol (NTP) server:

$ sudo ntpdate pool.ntp.org

ntpdate can be run manually as necessary to set the host clock, or it can be run from the host startup script to set the clock at boot time. This is useful in some cases to set the clock initially before starting the NTP daemon ntpd. It is also possible to run ntpdate from a cron script. However, it is important to note that ntpdate with contrived cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing resource use. Finally, since ntpdate does not discipline the host clock frequency as does ntpd, the accuracy using ntpdate is limited.[^1]

From the description, we can learn that we can make things even easier by installing the NTP package:

$ sudo apt-get install -y ntp

Network Time Protocol daemon and utility programs NTP, the Network Time Protocol, is used to keep computer clocks accurate by synchronizing them over the Internet or a local network, or by following an accurate hardware receiver that interprets GPS, DCF-77, NIST or similar time signals.[^2]

Verify the installation and execution:

$ ps -e | grep ntpd
4964 ? 00:00:00 ntpd

with the environment:

$ aws --version
aws-cli/1.10.53 Python/2.7.6 Linux/3.13.0-92-generic botocore/1.4.43

[^1]: $ man ntpdate
[^2]: $ apt-cache show ntp

Use CLI s3api to Retrieve Metadata of Amazon S3 Object

In the Amazon S3 CLI, there are only a handful of commands:

cp ls mb mv rb rm sync website

We can use the cp command to retrieve an S3 object:

$ aws s3 cp s3://mybucket/myfile.json myfile.json

But if only the metadata of the object, such as the ETag or Content-Type, is needed, the S3 CLI does not have any command for that.

Now enter the s3api CLI. Its commands can retrieve not only S3 objects but also the associated metadata. For example, to retrieve an S3 object similar to aws s3 cp:

$ aws s3api get-object --bucket mybucket --key myfile.json

If just the metadata of the object is needed, use head-object, which retrieves metadata without the object itself, like the HTTP HEAD method.
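
A minimal sketch, reusing the bucket and key from the cp example above:

$ aws s3api head-object --bucket mybucket --key myfile.json

The response is a JSON document with fields such as ETag, ContentType, and ContentLength, but no object body.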

Revoke and Grant Public IP Addresses to Amazon EC2 Instances Via AWS Command Line Interface (CLI)

If you work from place to place, such as from one coffee shop to another, and you need access to your Amazon EC2 instances, you probably don't want to allow traffic from all IP addresses. You can use EC2 Security Groups to allow only the IP addresses of those locations. But once you move on to a different location, you want to delete the IP address of the previous location. Doing this manually, over and over again, quickly becomes cumbersome. Here is a command line method that quickly removes all other locations and allows traffic only from your current location.

The steps are:

  1. Revoke all existing sources to a particular port
  2. Grant access to the port only from the current IP address

Assume the following:

  • Profile: default
  • Security group: mygroup
  • Protocol: tcp
  • Port: 22

First, revoke access to the port from all IP addresses:

$ aws ec2 describe-security-groups \
    --profile default \
    --group-names mygroup \
    --query 'SecurityGroups[0].IpPermissions[?ToPort==`22`].IpRanges[].CidrIp' | \
  jq .[] | \
  xargs -n 1 aws ec2 revoke-security-group-ingress \
    --profile default \
    --group-name mygroup \
    --protocol tcp \
    --port 22 \
    --cidr

The aws ec2 describe-security-groups command before the first pipe returns JSON-formatted data, filtered via a JMESPath query, which the AWS CLI supports natively, for example:

[
    "XXX.XXX.XXX.XXX/32",
    "XXX.XXX.XXX.XXX/32"
]

The jq command simply converts the JSON array into line-by-line strings, which xargs takes in, looping through and revoking one IP address at a time.

After this step, all originally allowed IP addresses have been revoked. The next step is to grant access to the port from a single IP address, your current one.
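
One way to do it, as a sketch: authorize-security-group-ingress grants the access, and checkip.amazonaws.com (an Amazon service that echoes your public IP address) supplies the CIDR:

$ aws ec2 authorize-security-group-ingress \
    --profile default \
    --group-name mygroup \
    --protocol tcp \
    --port 22 \
    --cidr $(curl -s https://checkip.amazonaws.com)/32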

Delete All Messages in an Amazon SQS Queue via AWS CLI

Amazon SQS, or Simple Queue Service, is a fast, reliable, scalable, fully managed message queuing service. There is also the AWS CLI, or Command Line Interface, available to use with the service.

If you have a lot of messages in a queue, this command will show the approximate number:

$ aws sqs get-queue-attributes \
    --queue-url $url \
    --attribute-names ApproximateNumberOfMessages

Where $url is the URL to the Amazon SQS queue.
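
If you don't have the URL handy, it can be looked up by the queue name; a sketch, where myqueue is a placeholder:

$ url=$(aws sqs get-queue-url --queue-name myqueue \
    --query QueueUrl --output text)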

There is no command to delete all messages yet, but you can chain a few commands together to make it work:
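
A sketch of one such chain: receive messages in batches of up to ten, extract the receipt handles, and feed each one to delete-message; repeat until the approximate count reaches zero:

$ aws sqs receive-message \
    --queue-url $url \
    --max-number-of-messages 10 \
    --query 'Messages[*].ReceiptHandle' \
    --output text | \
  xargs -n 1 aws sqs delete-message \
    --queue-url $url \
    --receipt-handle

Newer versions of the AWS CLI also provide aws sqs purge-queue, which deletes all messages in a single call.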

Use CNAME for Resolving Private and Public IP Addresses in Amazon EC2

"A security group acts as a virtual firewall that controls the traffic for one or more instances." [1] Amazon EC2 Security Groups are not just capable of controlling traffic from an IP address, but also from all EC2 instances belonging to a specific security group. I want to allow an instance belonging to one security group to access an instance belonging to another security group via a custom domain name (subdomain.example.com).

But when I configured the subdomain via Amazon Route 53, I misconfigured it by assigning an A record with an IP address, the Elastic IP address of the instance. I should have used a CNAME record and assigned the public DNS name (the public hostname of the EC2 instance).
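
Such a record can be applied with change-resource-record-sets; a sketch, where the hosted zone ID and hostnames are placeholders:

$ aws route53 change-resource-record-sets \
    --hosted-zone-id ZXXXXXXXXXXXXX \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "subdomain.example.com",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com"}]
            }
        }]
    }'

The reason the CNAME works: from within EC2, the public DNS name of an instance resolves to its private IP address, so instance-to-instance traffic still matches the source security group rule.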

Amazon S3 Delimiter and Prefix

Amazon S3 is an inexpensive online file storage service, and there is a JavaScript SDK to use it. A few things puzzled me when using the SDK:

  1. How to use parameters Delimiter and Prefix?
  2. What is the difference between CommonPrefixes and Contents?
  3. How to create a folder/directory with JavaScript SDK?

To retrieve objects in an Amazon S3 bucket, the operation is listObjects. listObjects does not return the content of the objects, only the keys and metadata such as size and owner.

To make a call to get a list of objects in a bucket:

s3.listObjects(params, function (err, data) {
    // ...
});

Where the params can be configured with the following parameters:

  • Bucket
  • Delimiter
  • EncodingType
  • Marker
  • MaxKeys
  • Prefix

But what are Delimiter and Prefix? And how to use them?

Let’s start by creating some objects in an Amazon S3 bucket similar to the following file structure. This can be easily done by using the AWS Console.

.
├── directory
│   ├── directory
│   │   └── file
│   └── file
└── file

2 directories, 3 files

In Amazon S3, the objects are:

directory/
directory/directory/
directory/directory/file
directory/file
file

One thing to keep in mind is that Amazon S3 is not a file system. There is not really the concept of file and directory/folder. From the console, it might look like there are 2 directories and 3 files. But they are all objects. And objects are listed alphabetically by their keys.

To make it a little bit clearer, let's invoke the listObjects method. Bucket is the only required parameter:

params = {
    Bucket: 'example'
};

The response data received in the callback function:

{ Contents:
   [ { Key: 'directory/',
       LastModified: ...,
       ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
       Size: 0,
       Owner: [Object],
       StorageClass: 'STANDARD' },
     { Key: 'directory/directory/',
       LastModified: ...,
       ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
       Size: 0,
       Owner: [Object],
       StorageClass: 'STANDARD' },
     { Key: 'directory/directory/file',
       LastModified: ...,
       ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
       Size: 0,
       Owner: [Object],
       StorageClass: 'STANDARD' },
     { Key: 'directory/file',
       LastModified: ...,
       ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
       Size: 0,
       Owner: [Object],
       StorageClass: 'STANDARD' },
     { Key: 'file',
       LastModified: ...,
       ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
       Size: 0,
       Owner: [Object],
       StorageClass: 'STANDARD' } ],
  CommonPrefixes: [],
  Name: 'example',
  Prefix: '',
  Marker: '',
  MaxKeys: 1000,
  IsTruncated: false }

If this were a file system, you might expect:

directory/
file

But it is not, because a bucket does not work like a folder or a directory, where the immediate files inside the directory are shown. The objects inside the bucket are laid out flat and alphabetically.

In UNIX, a directory is a file, but in Amazon S3, everything is an object, and can be identified by key.

So, how do we make Amazon S3 behave more like a folder or a directory? Or how do we list just the first-level contents of the bucket?

In order to make it work like a directory, you have to use Delimiter and Prefix. Delimiter is what you use to group keys. It does not have to be a single character; it can be a string of characters. And Prefix limits the response to keys that begin with the specified prefix.

Delimiter

Let’s start by adding the following delimiter:

params = {
    Bucket: 'example',
    Delimiter: '/'
};

You will get something more like a listing of a directory:

{ Contents:
   [ { Key: 'file' } ],
  CommonPrefixes: [ { Prefix: 'directory/' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: '/',
  IsTruncated: false }

There are a directory directory/ and a file file. What happened is that all the following objects except file were grouped by the delimiter /:

directory/
directory/directory/
directory/directory/file
directory/file
file

So, the result is:

directory/
file

This feels more like the listing of a directory or folder. But if we change Delimiter to i, you get no Contents, just the prefixes:

{ Contents: [],
  CommonPrefixes: [ { Prefix: 'di' }, { Prefix: 'fi' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: 'i',
  IsTruncated: false }

All keys can be grouped into two prefixes: di and fi. Therefore, Amazon S3 is not a file system, but might act like one if using the right parameters.

As mentioned, Delimiter does not need to be a single character. With Delimiter: '/directory':

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/file' },
     { Key: 'file' } ],
  CommonPrefixes: [ { Prefix: 'directory/directory' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: '/directory',
  IsTruncated: false }

Recall the bucket structure:

directory/
directory/directory/
directory/directory/file
directory/file
file

Both directory/directory/ and directory/directory/file are grouped into a common prefix: directory/directory, due to the common grouping string /directory.

Let’s try another one with Delimiter: 'directory':

{ Contents:
   [ { Key: 'file' } ],
  CommonPrefixes: [ { Prefix: 'directory' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: 'directory',
  IsTruncated: false }

Okay, one more. Let’s try ry/fi:

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/directory/' },
     { Key: 'file' } ],
  CommonPrefixes:
   [ { Prefix: 'directory/directory/fi' },
     { Prefix: 'directory/fi' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: 'ry/fi',
  IsTruncated: false }

So, remember that Delimiter is just providing a grouping functionality for keys. If you want it to behave like a file system, use Delimiter: '/'.

Prefix

Prefix is much easier to understand: it is a filter that limits the keys returned to those beginning with the specified prefix.

With the same structure:

directory/
directory/directory/
directory/directory/file
directory/file
file

Let’s set the Prefix parameter to directory:

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/directory/' },
     { Key: 'directory/directory/file' },
     { Key: 'directory/file' } ],
  CommonPrefixes: [],
  Name: 'example',
  Prefix: 'directory',
  MaxKeys: 1000,
  IsTruncated: false }

How about directory/:

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/directory/' },
     { Key: 'directory/directory/file' },
     { Key: 'directory/file' } ],
  CommonPrefixes: [],
  Prefix: 'directory/' }

Both directory and directory/ prefixes return the same result here.

If we try something slightly different, Prefix: 'directory/d':

{ Contents:
   [ { Key: 'directory/directory/' },
     { Key: 'directory/directory/file' } ],
  CommonPrefixes: [],
  Prefix: 'directory/d' }

Putting it all together with both Delimiter: 'directory' and Prefix: 'directory':

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/file' } ],
  CommonPrefixes: [ { Prefix: 'directory/directory' } ],
  Prefix: 'directory',
  Delimiter: 'directory' }

First, list the keys prefixed by directory:

directory/
directory/directory/
directory/directory/file
directory/file

Group them by the delimiter directory with prefix directory:

directory/directory

The resulting Contents are:

directory/
directory/file

and CommonPrefixes are:

directory/directory

Maybe changing Delimiter to i could give a better perspective:

{ Contents:
   [ { Key: 'directory/' } ],
  CommonPrefixes: [ { Prefix: 'directory/di' }, { Prefix: 'directory/fi' } ],
  Prefix: 'directory',
  Delimiter: 'i' }

as:

directory/               # key to show
directory/directory/     # group to 'directory/di'
directory/directory/file # group to 'directory/di'
directory/file           # group to 'directory/fi'
file                     # ignored due to prefix

One advantage of using Amazon S3 over listing a directory is that you don't need to worry about nested directories; everything is flattened. So you can loop through all keys just by specifying the Prefix property, as sketched below.
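
A sketch of such a loop, assuming the same s3 client and bucket as in the earlier examples; it pages through results with the Marker parameter and the IsTruncated flag:

function listAllKeys(marker, done) {
    s3.listObjects({
        Bucket: 'example',
        Prefix: 'directory/',
        Marker: marker // undefined on the first call
    }, function (err, data) {
        if (err) { return done(err); }
        data.Contents.forEach(function (object) {
            console.log(object.Key);
        });
        if (data.IsTruncated) {
            // Without a Delimiter, continue after the last key returned.
            var last = data.Contents[data.Contents.length - 1].Key;
            return listAllKeys(data.NextMarker || last, done);
        }
        done(null);
    });
}

listAllKeys(undefined, function (err) {
    if (err) { console.error(err); }
});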

Directory/Folder

If you use the Amazon AWS console to "Create Folder", you can create a directory/folder and upload a file inside it. In effect, you are actually creating two objects with the following keys:

directory/
directory/file

If you use the following command to upload a file, the directory is not created:

$ aws s3 cp file s3://example/directory/file

Because Amazon S3 is not a file system, but a key/value store. If you use the listObjects method, you will see just one object. That is the same reason you cannot copy a local directory:

$ aws s3 cp directory s3://example/directory
upload failed: aws/ to s3://example/directory [Errno 21] Is a directory: u'/home/chao/tmp/directory/'

But we can use the JavaScript SDK to create a directory/folder:

s3.putObject({ Bucket: 'example', Key: 'directory/' }, function (err, data) {
    if (err) { return console.error(err); }
    console.log(data);
});

Note that you must use directory/ with the trailing slash instead of the one without. Otherwise, it is just a regular file, not a directory.

Install SSH Public Key to All AWS EC2 Instances

I've got a new laptop, and I need to install the SSH public key of the new machine on all my AWS EC2 instances in order to enable keyless access. I can use ssh-copy-id to install the public key one instance at a time, but I can also do it all at once:

$ aws ec2 describe-instances --output text \
  --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' \
  | while read host; do \
  ssh-copy-id -i /path/to/key.pub $USER@$host; done

Somehow, when using just PublicIpAddress, some IP addresses in the response were crammed into a single line. So, I use {IP:PublicIpAddress} instead.

For non-standard port:

$ ssh-copy-id -i /path/to/key.pub "$USER@$host -p $port"

The only problem is that it might install a duplicate key in the ~/.ssh/authorized_keys file of the remote instance if the key has already been installed. One way to solve this is to test the login from the new machine and collect only the IP addresses that the new machine does not have access to:

$ aws ec2 describe-instances --output text \
  --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' \
  | while read host; do ssh -q $USER@$host exit \
  || echo $host; done > instances.txt

Now back to the old machine, install the public key at once:

$ cat instances.txt | while read host; \
  do ssh-copy-id -i /path/to/key.pub $USER@$host; done

Amazon AWS Command Line Interface (CLI)

This is a brief guide for installing AWS Command Line Interface (CLI) on Ubuntu Linux.

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. [2]

The point here is unified, one tool to run all Amazon AWS services.

Install

The installation procedure applies to Ubuntu Linux with Zsh and Bash.

Install pip, a Python package manager:

$ sudo apt-get install python-pip

Install awscli:

$ sudo pip install awscli

Install autocompletion:

$ which aws_zsh_completer.sh
/usr/local/bin/aws_zsh_completer.sh
$ source aws_zsh_completer.sh

Add this line to ~/.zshrc as well.

Or for Bash ~/.bashrc:

$ echo $'\n# Enable AWS CLI autocompletion' >> ~/.bashrc
$ echo 'complete -C aws_completer aws' >> ~/.bashrc
$ source ~/.bashrc

Test installation:

$ which aws
/usr/local/bin/aws
$ aws help

Test autocompletion:

$ aws <tab><tab>

You should see a list of all available AWS commands.

Usage

Before using aws-cli, you need to tell it about your AWS credentials. There are three ways to specify AWS credentials:

  1. Environment variables
  2. Config file
  3. IAM Role

Using the config file is preferred; it is a simple INI-format file stored in ~/.aws/config. A soft link can be used to link to it, or just tell awscli where to find it:

$ export AWS_CONFIG_FILE=/path/to/config_file

It is better to use IAM roles with any of the AWS services:

The final option for credentials is highly recommended if you are using aws-cli on an EC2 instance. IAM Roles are a great way to have credentials installed automatically on your instance. If you are using IAM Roles, aws-cli will find them and use them automatically. [4]

The default output is in JSON format. Other formats are tab-delimited text and an ASCII-formatted table. For example, using a --query filter and table output:

$ aws ec2 describe-instances --query 'Reservations[*].Instances[*].{
    ID:InstanceId, TYPE:InstanceType, ZONE:Placement.AvailabilityZone,
    SECURITY:SecurityGroups[0].GroupId, KEY:KeyName, VPC:VpcId,
    STATE:State.Name}' --output table

This will print a nice looking table of all EC2 instances.

The command line options also accept JSON format. But when passing in large blocks of data, referring to a JSON file is much easier. Both local files and remote URLs can be used.

Upgrade

Check the installed and the latest versions:

$ pip search awscli

Upgrade AWS CLI to the latest version:

$ sudo pip install --upgrade awscli

References

  1. AWS Command Line Interface
  2. User Guide
  3. Reference
  4. GitHub repository