An error occurred: ApiGatewayDeployment - No integration defined for method
The problem was not a bug on the Serverless side; it originated from a manually created resource with no integration defined for its POST method:
An error occurred (NotFoundException) when calling the GetIntegration operation: No integration defined for method
After removing the resource, the deployment works again.
In conclusion, if a resource exists and one of its methods has no integration, the REST API cannot be deployed. Either removing the resource or creating an integration will resolve the problem.
The following script searches API Gateway for methods with no integration defined:
#!/bin/sh
#
# Search Amazon API Gateway for resources that have no integration defined for
# their methods.
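# NOTE: The body below is a sketch of one possible implementation, assuming
# the AWS CLI and jq are installed and credentials are configured.
for api in $(aws apigateway get-rest-apis --query 'items[].id' --output text); do
  aws apigateway get-resources --rest-api-id "$api" \
    --query 'items[].[id, resourceMethods]' --output json |
    jq -r '.[] | .[0] as $id | ((.[1] // {}) | keys[]) | "\($id) \(.)"' |
    while read -r resource method; do
      # get-integration fails with NotFoundException when no integration exists
      aws apigateway get-integration --rest-api-id "$api" \
        --resource-id "$resource" --http-method "$method" > /dev/null 2>&1 ||
        echo "No integration: api=$api resource=$resource method=$method"
    done
done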
Registering a domain with Amazon Route 53 via the CLI is very straightforward; the only complicated part is generating an acceptable JSON structure, then somehow entering the JSON string on the command line without interference from the shell.
Most of the command line options require simple string values, but some options, like --admin-contact, require structured data, which is complex to enter on the command line. The easiest way is to forget all the other options and use --cli-input-json to supply everything as JSON.
First let’s generate the skeleton or template to use:
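$ aws route53domains register-domain --generate-cli-skeleton > domain.json

Fill in the generated skeleton with your domain and contact details (the file name domain.json is just an example, reused in the next step).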
Now let's register: run the JSON through jq to clean up extra blanks and end-of-line characters, then pipe it to the xargs command, using the newline character as the delimiter instead of the default blank:
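A sketch, assuming the completed skeleton is in domain.json and GNU xargs (for the -d option) is available:

$ jq -c . domain.json | xargs -d '\n' aws route53domains register-domain --cli-input-json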
Running into an error when executing an AWS command:
$ aws ec2 describe-instances
An error occurred (AuthFailure) when calling the DescribeInstances operation: AWS
was not able to validate the provided access credentials
From the error message, it appears to be a problem with the access credentials. But after updating to new credentials, and even updating the AWS package, the error still persisted. After trying out other commands, there was an error message containing “signature not yet current” with timestamps. So, the actual problem was an inaccurate local clock. Hence, the solution is to sync the local date and time by polling a Network Time Protocol (NTP) server:
$ sudo ntpdate pool.ntp.org
ntpdate can be run manually as necessary to set the host clock, or it can be run from the host startup script to set the clock at boot time. This is useful in some cases to set the clock initially before starting the NTP daemon ntpd. It is also possible to run ntpdate from a cron script. However, it is important to note that ntpdate with contrived cron scripts is no substitute for the NTP daemon, which uses sophisticated algorithms to maximize accuracy and reliability while minimizing resource use. Finally, since ntpdate does not discipline the host clock frequency as does ntpd, the accuracy using ntpdate is limited.[^1]
From the description, we can make things even easier by installing the NTP package:
$ sudo apt-get install -y ntp
Network Time Protocol daemon and utility programs NTP, the Network Time Protocol, is used to keep computer clocks accurate by synchronizing them over the Internet or a local network, or by following an accurate hardware receiver that interprets GPS, DCF-77, NIST or similar time signals.[^2]
In the Amazon S3 CLI, there are only a handful of commands:
cp ls mb mv rb rm sync website
We can use the cp command to retrieve an S3 object:
$ aws s3 cp s3://mybucket/myfile.json myfile.json
But if only the metadata of the object, such as the ETag or Content-Type, is needed, the S3 CLI does not have a command for that.
Now enter the S3API CLI. Its commands can retrieve not only S3 objects but also the associated metadata. For example, to retrieve an S3 object similar to aws s3 cp, or just its metadata:
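$ aws s3api get-object --bucket mybucket --key myfile.json myfile.json

And to fetch only the metadata, such as the ETag and Content-Type, without downloading the object body:

$ aws s3api head-object --bucket mybucket --key myfile.json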
If you work from place to place, such as from one coffee shop to another, and you need access to your Amazon EC2 instances, but you don't want to allow traffic from all IP addresses, you can use EC2 Security Groups to allow the IP addresses of those locations. But once you move on to a different location, you want to delete the IP address of the previous location. Doing this manually, over and over again, quickly becomes cumbersome. Here is a command line method that quickly removes all other locations and allows traffic only from your current location.
The steps are:
Revoke all existing source addresses for a particular port
Grant access to the port only from the current IP address
Assume the following:
Profile: default
Security group: mygroup
Protocol: tcp
Port: 22
First, revoke access to the port from all IP addresses:
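A sketch, assuming jq is installed and the group can be addressed by name:

$ aws ec2 describe-security-groups --group-names mygroup \
    --query 'SecurityGroups[0].IpPermissions[?ToPort==`22`].IpRanges[].CidrIp' \
    --output json |
  jq -r '.[]' |
  xargs -n 1 aws ec2 revoke-security-group-ingress \
    --group-name mygroup --protocol tcp --port 22 --cidr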
The aws ec2 describe-security-groups command before the first pipe returns JSON-formatted data, filtered via a JMESPath query, which is supported by the AWS CLI; for example:
[
"XXX.XXX.XXX.XXX/32",
"XXX.XXX.XXX.XXX/32"
]
The jq command simply converts the JSON array into line-by-line strings, which xargs takes in, looping through and deleting one IP address at a time.
After this step, all originally allowed IP addresses are revoked. The next step is to grant access to the port from a single IP address:
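A sketch; checkip.amazonaws.com is one of several services that echo your public IP address:

$ aws ec2 authorize-security-group-ingress --group-name mygroup \
    --protocol tcp --port 22 \
    --cidr "$(curl -s https://checkip.amazonaws.com)/32"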
Amazon SQS, or Simple Queue Service, is a fast, reliable, scalable, fully managed message queuing service. There is also the AWS CLI, or Command Line Interface, available to use with the service.
If you have a lot of messages in a queue, this command will show the approximate number:
$ aws sqs get-queue-attributes \
--queue-url $url \
--attribute-names ApproximateNumberOfMessages
Where $url is the URL to the Amazon SQS queue.
There is no command to delete all messages yet, but you can chain a few commands together to make it work:
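One possible chain (a sketch; it receives and deletes one message at a time, so it is slow for large queues):

$ while true; do
    handle=$(aws sqs receive-message --queue-url $url \
      --query 'Messages[0].ReceiptHandle' --output text)
    [ "$handle" = "None" ] && break
    aws sqs delete-message --queue-url $url --receipt-handle "$handle"
  done

Newer versions of the AWS CLI also provide aws sqs purge-queue for exactly this purpose.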
“A security group acts as a virtual firewall that controls the traffic for one or more instances.” [1] Amazon EC2 Security Groups are capable of controlling traffic not just from an IP address, but also from all EC2 instances belonging to a specific security group. I wanted to allow an instance belonging to one security group to access an instance belonging to another security group via a custom domain name (subdomain.example.com).
But when I configured the subdomain via Amazon Route 53, I misconfigured it by assigning an A record pointing to an IP address (the Elastic IP address of the instance). I should have used a CNAME record and assigned the public DNS name (the public hostname of the EC2 instance). Within EC2, the public DNS name resolves to the instance's private IP address, so the traffic is recognized as coming from the source security group and the group-based rule matches.
Amazon S3 is an inexpensive online file storage service, and there is a JavaScript SDK to use it. The things that puzzled me when using the SDK were:
How to use parameters Delimiter and Prefix?
What is the difference between CommonPrefixes and Contents?
How to create a folder/directory with JavaScript SDK?
To retrieve objects in an Amazon S3 bucket, the operation is listObjects. The listObjects operation does not return the content of the object, but the key and metadata, such as the size and owner of the object.
To make a call to get a list of objects in a bucket:
s3.listObjects(params, function(err, data){
// ...
});
Where the params can be configured with the following parameters:
Bucket
Delimiter
EncodingType
Marker
MaxKeys
Prefix
But what are Delimiter and Prefix? And how to use them?
Let’s start by creating some objects in an Amazon S3 bucket similar to the following file structure. This can be easily done by using the AWS Console.
.
├── directory
│   ├── directory
│   │   └── file
│   └── file
└── file

2 directories, 3 files
In Amazon S3, the objects are:
directory/
directory/directory/
directory/directory/file
directory/file
file
One thing to keep in mind is that Amazon S3 is not a file system; there is not really a concept of file and directory/folder. From the console, it might look like there are 2 directories and 3 files, but they are all objects, and objects are listed alphabetically by their keys.
To make it a little bit more clear, let's invoke the listObjects method. Bucket is the only required parameter of the operation:
params = {
Bucket: 'example'
};
The response data in the callback function contains:
{ Contents:
[ { Key: 'directory/',
LastModified: ...,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
Owner: [Object],
StorageClass: 'STANDARD' },
{ Key: 'directory/directory/',
LastModified: ...,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
Owner: [Object],
StorageClass: 'STANDARD' },
{ Key: 'directory/directory/file',
LastModified: ...,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
Owner: [Object],
StorageClass: 'STANDARD' },
{ Key: 'directory/file',
LastModified: ...,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
Owner: [Object],
StorageClass: 'STANDARD' },
{ Key: 'file',
LastModified: ...,
ETag: '"d41d8cd98f00b204e9800998ecf8427e"',
Size: 0,
Owner: [Object],
StorageClass: 'STANDARD' } ],
CommonPrefixes: [],
Name: 'example',
Prefix: '',
Marker: '',
MaxKeys: 1000,
IsTruncated: false }
If this were a file structure, you might expect:
directory/
file
But it is not, because a bucket does not work like a folder or a directory, where only the immediate files inside the directory are shown. The objects inside a bucket are laid out flat and sorted alphabetically.
In UNIX, a directory is a file, but in Amazon S3, everything is an object, identified by its key.
So, how to make Amazon S3 behave more like a folder or a directory? Or how to just list the content of first level right inside the bucket?
In order to make it work like a directory, you have to use Delimiter and Prefix. Delimiter is what you use to group keys. It does not have to be a single character; it can be a string of characters. And Prefix limits the response to keys that begin with the specified prefix.
Delimiter
Let’s start by adding the following delimiter:
params = {
Bucket: 'example',
Delimiter: '/'
};
You will get something more like a listing of a directory:
{ Contents:
[ { Key: 'file' } ],
CommonPrefixes: [ { Prefix: 'directory/' } ],
Name: 'example',
Prefix: '',
MaxKeys: 1000,
Delimiter: '/',
IsTruncated: false }
There is a directory directory/ and a file file. What happened is that the following objects, except file, were grouped by the delimiter /:
directory/
directory/directory/
directory/directory/file
directory/file
file
So, the result is:
directory/
file
This feels more like a listing of a directory or folder. But if we change Delimiter to i, you get no Contents and just the prefixes:
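Every key contains the letter i, so each is rolled up into the portion of its key up to and including the first i, leaving something like this (abridged):

{ Contents: [],
  CommonPrefixes: [ { Prefix: 'di' }, { Prefix: 'fi' } ],
  Name: 'example',
  Prefix: '',
  MaxKeys: 1000,
  Delimiter: 'i',
  IsTruncated: false }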
Similarly, with Delimiter: '/directory', both directory/directory/ and directory/directory/file are grouped into a common prefix, directory/directory, due to the common grouping string /directory.
Let’s try another one with Delimiter: 'directory':
{ Contents:
[ { Key: 'file' } ],
CommonPrefixes: [ { Prefix: 'directory' } ],
Name: 'example',
Prefix: '',
MaxKeys: 1000,
Delimiter: 'directory',
IsTruncated: false }
Okay, one more. Let’s try ry/fi:
{ Contents:
[ { Key: 'directory/' },
{ Key: 'directory/directory/' },
{ Key: 'file' } ],
CommonPrefixes:
[ { Prefix: 'directory/directory/fi' },
{ Prefix: 'directory/fi' } ],
Name: 'example',
Prefix: '',
MaxKeys: 1000,
Delimiter: 'ry/fi',
IsTruncated: false }
So, remember that Delimiter just provides grouping functionality for keys. If you want it to behave like a file system, use Delimiter: '/'.
Prefix
Prefix is much easier to understand: it is a filter that limits the returned keys to those beginning with the specified prefix.
With the same structure:
directory/
directory/directory/
directory/directory/file
directory/file
file
Let’s set the Prefix parameter to directory:
{ Contents:
[ { Key: 'directory/' },
{ Key: 'directory/directory/' },
{ Key: 'directory/directory/file' },
{ Key: 'directory/file' } ],
CommonPrefixes: [],
Name: 'example',
Prefix: 'directory',
MaxKeys: 1000,
IsTruncated: false }
How about directory/:
{ Contents:
[ { Key: 'directory/' },
{ Key: 'directory/directory/' },
{ Key: 'directory/directory/file' },
{ Key: 'directory/file' } ],
CommonPrefixes: [],
Prefix: 'directory/' }
Both the directory and directory/ prefixes yield the same result.
If we try something slightly different, Prefix: 'directory/d':
{ Contents:
[ { Key: 'directory/directory/' },
{ Key: 'directory/directory/file' } ],
CommonPrefixes: [],
Prefix: 'directory/d' }
Putting it all together, with both Delimiter: 'directory' and Prefix: 'directory':
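Only directory/ and directory/file remain in Contents; the two deeper keys are grouped behind the delimiter, giving something like this (abridged):

{ Contents:
   [ { Key: 'directory/' },
     { Key: 'directory/file' } ],
  CommonPrefixes: [ { Prefix: 'directory/directory' } ],
  Name: 'example',
  Prefix: 'directory',
  MaxKeys: 1000,
  Delimiter: 'directory',
  IsTruncated: false }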
One advantage of using Amazon S3 over listing a directory is that you don't need to be concerned about nested directories; everything is flattened. So, you can loop through all objects just by specifying the Prefix parameter.
Directory/Folder
If you use the Amazon AWS console to “Create Folder” and upload a file into that directory/folder, you are in effect creating two objects with the following keys:
directory/
directory/file
If you use the following command to upload a file, the directory is not created:
$ aws s3 cp file s3://example/directory/file
Because Amazon S3 is not a file system but a key/value store, if you use the listObjects method, you will see just one object. That is also the reason you cannot copy a local directory:
$ aws s3 cp directory s3://example/directory
upload failed: aws/ to s3://example/directory [Errno 21] Is a directory: u'/home/chao/tmp/directory/'
But we can use the JavaScript SDK to create a directory/folder:
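A minimal sketch: create a zero-byte object whose key ends with a slash, which the console then displays as a folder:

s3.putObject({
  Bucket: 'example',
  Key: 'directory/', // the trailing slash is what makes it a "folder"
  Body: ''
}, function(err, data) {
  // ...
});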
I've got a new laptop, and I need to install the SSH public key of the new machine on all my AWS EC2 instances in order to enable password-less access. I could use ssh-copy-id to install the public key one instance at a time, but I can also do it all at once:
$ aws ec2 describe-instances --output text \
    --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' | \
  while read host; do \
    ssh-copy-id -i /path/to/key.pub $USER@$host; \
  done
Somehow, when using just PublicIpAddress, some IP addresses in the response were clustered on a single line, so I use {IP:PublicIpAddress} instead.
The only problem is that it might install a duplicate key in the ~/.ssh/authorized_keys file of a remote instance if the key has already been installed. One way to solve this problem is to test the login from the new machine and generate only the IP addresses that the new machine does not yet have access to:
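A sketch: BatchMode makes ssh fail instead of prompting for a password, so ssh-copy-id runs only against the hosts the new key cannot reach:

$ aws ec2 describe-instances --output text \
    --query 'Reservations[*].Instances[*].{IP:PublicIpAddress}' | \
  while read host; do \
    ssh -o BatchMode=yes -o ConnectTimeout=5 $USER@$host true 2> /dev/null \
      || ssh-copy-id -i /path/to/key.pub $USER@$host; \
  done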
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. [2]
The point here is unified, one tool to run all Amazon AWS services.
Install
The installation procedure applies to Ubuntu Linux with Zsh and Bash.
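A sketch of one common route at the time, using Python's pip, then running aws help to verify:

$ sudo pip install awscli
$ aws help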
You should see a list of all available AWS commands.
Usage
Before using aws-cli, you need to tell it about your AWS credentials. There are three ways to specify AWS credentials:
Environment variables
Config file
IAM Role
Using a config file is preferred; it is a simple INI-format file stored in ~/.aws/config. A soft link can be used to link to it, or you can just tell aws-cli where to find it:
$ export AWS_CONFIG_FILE=/path/to/config_file
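The config file itself looks something like this (placeholder values shown):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
region = us-east-1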
It is better to use IAM roles with any of the AWS services:
The final option for credentials is highly recommended if you are using aws-cli on an EC2 instance. IAM Roles are a great way to have credentials installed automatically on your instance. If you are using IAM Roles, aws-cli will find them and use them automatically. [4]
The default output is in JSON format. The other formats are tab-delimited text and an ASCII-formatted table. For example, using a --query filter and table output:
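A sketch (the selected fields are arbitrary):

$ aws ec2 describe-instances --output table \
    --query 'Reservations[*].Instances[*].{ID:InstanceId,State:State.Name,IP:PublicIpAddress}'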
This will print a nice looking table of all EC2 instances.
The command line options also accept JSON format. But when passing in large blocks of data, referring to a JSON file is much easier. Both local files and remote URLs can be used:
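For example (a sketch; filters.json is a hypothetical file of EC2 filters):

$ aws ec2 describe-instances --filters file://filters.json
$ aws ec2 describe-instances --filters https://example.com/filters.json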