This project riffs off of the DynamoDB Golang samples and the Serverless Framework Go example to create an example of how to build a simple API Gateway -> Lambda -> DynamoDB set of methods.
Note that, instead of using `create_table.go` to set up the initial table as the AWS code example does, this project uses the resource-definition mechanism that Serverless provides. The individual code is organized as follows (a sketch of one handler follows the list):
- `functions/post.go` - POST method for creating a new item
- `functions/get.go` - GET method for reading a specific item
- `functions/delete.go` - DELETE method for deleting a specific item
- `functions/put.go` - PUT method for updating an existing item
- `functions/list-by-year.go` - GET method for listing the items for a given year
- `data/XXX.json` - Set of sample data files for the POST and PUT actions
- `Makefile` - Used for dep package management and compiling the individual functions
- `serverless.yml` - Defines the initial table, the function definitions, and the API Gateway events
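For orientation, here is a minimal sketch of what a handler like `functions/post.go` might look like. It assumes the standard `aws-lambda-go` and `aws-sdk-go` (v1) packages; the `Movie` struct and the `Movies` table name are illustrative assumptions based on the sample data shown later, not necessarily what this repo uses.

```go
package main

import (
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Movie mirrors the shape of the JSON documents under data/ (assumed field names).
type Movie struct {
	Year  int                    `json:"year"`
	Title string                 `json:"title"`
	Info  map[string]interface{} `json:"info"`
}

// Handler unmarshals the POSTed JSON body and writes it to the table as a new item.
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	var m Movie
	if err := json.Unmarshal([]byte(req.Body), &m); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusBadRequest}, nil
	}

	// Convert the struct into the DynamoDB attribute-value map PutItem expects.
	item, err := dynamodbattribute.MarshalMap(m)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}

	svc := dynamodb.New(session.Must(session.NewSession()))
	if _, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("Movies"), // hypothetical table name
		Item:      item,
	}); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}

	return events.APIGatewayProxyResponse{StatusCode: http.StatusCreated}, nil
}

func main() {
	// Each function file is its own program; serverless.yml points at its compiled binary.
	lambda.Start(Handler)
}
```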
If you are a Serverless Framework rookie, follow the installation instructions here. If you are a grizzled vet, be sure that you have v1.26 or later, as that's the version that introduced Golang support. You'll also need to install Go and its dependency manager, dep.
When both of those tasks are done, cd into your `GOPATH` (more than likely ~/go/src/) and clone this project into that folder. Then cd into the resulting `snaplock` folder and compile the source with `make`:
$ make
dep ensure
env GOOS=linux go build -ldflags="-s -w" -o bin/get functions/get.go
env GOOS=linux go build -ldflags="-s -w" -o bin/post functions/post.go
env GOOS=linux go build -ldflags="-s -w" -o bin/delete functions/delete.go
env GOOS=linux go build -ldflags="-s -w" -o bin/put functions/put.go
env GOOS=linux go build -ldflags="-s -w" -o bin/list-by-year functions/list-by-year.go
What is this Makefile doing? First, it runs the `dep ensure` command, which scans the underlying .go files for dependencies to install, grabbing them from GitHub as needed and placing them under a newly created `vendor` folder inside `snaplock`. Then it compiles the individual function files, placing the resulting binaries in the `bin` folder.
If you look at the `serverless.yml` file, you'll see that it references those recently compiled function binaries, one for each function in our service, with each one corresponding to a different verb/path in the API we're creating. Deploy the entire service with the `sls` command:
$ sls deploy
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless: Creating Stack...
Serverless: Checking Stack create progress...
.....
Serverless: Stack create finished...
Serverless: Uploading CloudFormation file to S3...
Serverless: Uploading artifacts...
Serverless: Uploading service .zip file to S3 (15.19 MB)...
Serverless: Validating template...
Serverless: Updating Stack...
Serverless: Checking Stack update progress...
...................................................................................................
Serverless: Stack update finished...
Service Information
service: snaplock
stage: dev
region: us-east-1
stack: snaplock-dev
api keys:
None
endpoints:
GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}
GET - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}/{title}
POST - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/staging
DELETE - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies/{year}/{title}
PUT - https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies
functions:
list: snaplock-dev-list
get: snaplock-dev-get
post: snaplock-dev-post
delete: snaplock-dev-delete
put: snaplock-dev-put
When done, you can find the new DynamoDB table in the AWS Console, which should initially look like this:
and your `<base URL>` will be of the format `https://XXXXXXXXXX.execute-api.us-east-1.amazonaws.com/dev/movies`, where `XXXXXXXXXX` will be some random string generated by AWS.
The development cycle would then be:
- Make changes to the .go files
- Run `make` to compile the binaries
- Run `sls deploy` to construct the service and push changes to Lambda
- Use `curl` to then interrogate the API as described below
Once deployed, and substituting your `<base URL>`, the following curl commands can be used to interact with the resulting API; the results can be confirmed in the DynamoDB console.
curl -X POST https://<base URL> -d @data/post1.json
Which should result in the DynamoDB table looking like this:
Rinse/repeat for the other data files to yield:
Using the year and title (replacing spaces with '-' or '+'), you can now obtain an item as follows (prettified output):
curl https://<base URL>/2013/Hunger-Games-Catching-Fire
{
"year": 2013,
"title": "Hunger Games Catching Fire",
"info": {
"plot": "Katniss Everdeen and Peeta Mellark become targets of the Capitol after their victory in the 74th Hunger Games sparks a rebellion in the Districts of Panem.",
"rating": 7.6
}
}
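Under the hood, the get function presumably reads the `{year}` and `{title}` path parameters and performs a DynamoDB GetItem against the table's composite key (year as the partition key, title as the sort key). A minimal sketch, where the `Movies` table name and the dash/plus-to-space handling are assumptions rather than this repo's exact code:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Handler looks up a single movie by its composite key (year + title).
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	year := req.PathParameters["year"]
	// Dashes and pluses in the URL stand in for spaces in the title (assumption).
	title := strings.NewReplacer("-", " ", "+", " ").Replace(req.PathParameters["title"])

	svc := dynamodb.New(session.Must(session.NewSession()))
	out, err := svc.GetItem(&dynamodb.GetItemInput{
		TableName: aws.String("Movies"), // hypothetical table name
		Key: map[string]*dynamodb.AttributeValue{
			"year":  {N: aws.String(year)},
			"title": {S: aws.String(title)},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	if len(out.Item) == 0 {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusNotFound}, nil
	}

	// Convert the DynamoDB attribute map back into plain JSON for the response body.
	var m map[string]interface{}
	if err := dynamodbattribute.UnmarshalMap(out.Item, &m); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	body, err := json.Marshal(m)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
}

func main() {
	lambda.Start(Handler)
}
```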
You can list items by year as follows (prettified output):
curl https://<base URL>/2013
[
{
"year": 2013,
"title": "Hunger Games Catching Fire",
"info": {
"plot": "",
"rating": 0
}
},
{
"year": 2013,
"title": "Turn It Down Or Else",
"info": {
"plot": "",
"rating": 0
}
}
]
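The list-by-year function presumably runs a DynamoDB Query keyed only on the year. A minimal sketch under the same assumptions as the earlier handler sketches (hypothetical `Movies` table name):

```go
package main

import (
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// Handler returns every movie whose partition key matches the {year} path parameter.
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	svc := dynamodb.New(session.Must(session.NewSession()))

	out, err := svc.Query(&dynamodb.QueryInput{
		TableName:              aws.String("Movies"), // hypothetical table name
		KeyConditionExpression: aws.String("#y = :year"),
		ExpressionAttributeNames: map[string]*string{
			"#y": aws.String("year"), // "year" is a DynamoDB reserved word, so alias it
		},
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":year": {N: aws.String(req.PathParameters["year"])},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}

	// Convert the matched items back into plain JSON for the response body.
	var movies []map[string]interface{}
	if err := dynamodbattribute.UnmarshalListOfMaps(out.Items, &movies); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	body, err := json.Marshal(movies)
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
}

func main() {
	lambda.Start(Handler)
}
```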
Using the same year and title specifiers, you can delete as follows:
curl -X DELETE https://<base URL>/2013/Hunger-Games-Catching-Fire
Which should result in the DynamoDB table looking like this:
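The delete function presumably issues a DynamoDB DeleteItem using the same composite key it parsed from the path; a minimal sketch under the same assumptions:

```go
package main

import (
	"net/http"
	"strings"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// Handler deletes the item identified by the {year} and {title} path parameters.
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// Dashes and pluses in the URL stand in for spaces in the title (assumption).
	title := strings.NewReplacer("-", " ", "+", " ").Replace(req.PathParameters["title"])

	svc := dynamodb.New(session.Must(session.NewSession()))
	_, err := svc.DeleteItem(&dynamodb.DeleteItemInput{
		TableName: aws.String("Movies"), // hypothetical table name
		Key: map[string]*dynamodb.AttributeValue{
			"year":  {N: aws.String(req.PathParameters["year"])},
			"title": {S: aws.String(title)},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	return events.APIGatewayProxyResponse{StatusCode: http.StatusNoContent}, nil
}

func main() {
	lambda.Start(Handler)
}
```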
You can update as follows:
curl -X PUT https://<base URL> -d @data/put3.json
Which should result in the DynamoDB table looking like this:
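As with the other verbs, the PUT handler presumably unmarshals the JSON body and issues an UpdateItem (or a straight PutItem) against the same year/title key; a minimal sketch, with the `Movies` table name, the field names, and the update expression all being assumptions:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strconv"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// movieUpdate mirrors the JSON documents under data/ (assumed field names).
type movieUpdate struct {
	Year  int    `json:"year"`
	Title string `json:"title"`
	Info  struct {
		Plot   string  `json:"plot"`
		Rating float64 `json:"rating"`
	} `json:"info"`
}

// Handler updates the plot and rating of the existing item identified by year + title.
func Handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	var m movieUpdate
	if err := json.Unmarshal([]byte(req.Body), &m); err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusBadRequest}, nil
	}

	svc := dynamodb.New(session.Must(session.NewSession()))
	_, err := svc.UpdateItem(&dynamodb.UpdateItemInput{
		TableName: aws.String("Movies"), // hypothetical table name
		Key: map[string]*dynamodb.AttributeValue{
			"year":  {N: aws.String(strconv.Itoa(m.Year))},
			"title": {S: aws.String(m.Title)},
		},
		UpdateExpression: aws.String("set info.plot = :p, info.rating = :r"),
		ExpressionAttributeValues: map[string]*dynamodb.AttributeValue{
			":p": {S: aws.String(m.Info.Plot)},
			":r": {N: aws.String(strconv.FormatFloat(m.Info.Rating, 'f', -1, 64))},
		},
	})
	if err != nil {
		return events.APIGatewayProxyResponse{StatusCode: http.StatusInternalServerError}, err
	}
	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK}, nil
}

func main() {
	lambda.Start(Handler)
}
```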