Simple example demonstrating a serverless, Rust-powered API implementation.
AWS provides a Rust runtime for creating Lambda functions. You can find an article about it here.
Using Lambda functions as request handlers allows for fast responses with virtually unlimited scalability. However, the handlers themselves become the bottleneck when they are implemented in slower runtimes such as Python. Handlers implemented in Rust, on the other hand, should in theory come close to the best possible performance without losing the benefits of serverless.
This repository provides a minimal example of combining Rust with AWS Lambda to produce a very fast and scalable API. In this setup, AWS API Gateway proxies requests to the correct Lambda function, which then formulates and returns a response.
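For a concrete picture of what such a handler looks like, here is a minimal sketch using the lambda_runtime, tokio, and serde_json crates (the crate choice and the response payload are illustrative assumptions, not code taken from this repository):

```rust
use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Receives the event proxied by API Gateway and returns a JSON body.
async fn handler(_event: LambdaEvent<Value>) -> Result<Value, Error> {
    Ok(json!({ "message": "Hello, world!" }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Hand the handler to the Lambda runtime, which polls for invocations.
    lambda_runtime::run(service_fn(handler)).await
}
```

Each handler compiles to its own small executable, which is what gets uploaded to Lambda in the steps below.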
Please keep in mind that this is still a work-in-progress! Feel free to contribute.
First, add the x86_64-unknown-linux-musl target:
rustup target add x86_64-unknown-linux-musl && rustup update
Then install a linker for the target platform (on macOS, via Homebrew):
brew install filosottile/musl-cross/musl-cross
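You can verify the linker is on your PATH before continuing:

```sh
x86_64-linux-musl-gcc --version
```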
Lastly, create an entry in .cargo/config telling Cargo to use this linker and to build for the musl target by default:
mkdir -p .cargo \
&& echo -e '[target.x86_64-unknown-linux-musl]\nlinker = "x86_64-linux-musl-gcc"' \
> .cargo/config \
&& echo -e '\n[build]\ntarget = "x86_64-unknown-linux-musl"' \
>> .cargo/config
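After running this, .cargo/config should contain:

```toml
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"

[build]
target = "x86_64-unknown-linux-musl"
```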
Alternatively, there are Docker images that provide an isolated environment for building the executables.
Deploying the API is a multi-step process:
- Compile the executables
- Deploy the executables to AWS Lambda
- Bundle the OpenAPI specification into a single file
- Create the API Gateway specification and attach Lambda handlers
You can compile the executables with:
cargo build --release
The output will be in target/x86_64-unknown-linux-musl/release/.
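If you want to confirm that a binary was cross-compiled and statically linked against musl (using the get_hello handler referenced in the deployment step below), the file utility should report a statically linked x86-64 ELF executable:

```sh
file target/x86_64-unknown-linux-musl/release/get_hello
```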
Start by installing the AWS CLI.
To deploy a single handler, it first needs to be bundled for deployment; Lambda's custom runtime expects the executable inside the archive to be named bootstrap:
cp target/x86_64-unknown-linux-musl/release/get_hello ./bootstrap \
&& zip lambda.zip bootstrap \
&& rm bootstrap
To deploy it with the CLI, run:
aws lambda create-function \
--function-name getHello \
--runtime provided \
--role arn:aws:iam::{XXXXXXXXXXXX}:role/{role} \
--handler null \
--zip-file fileb://./lambda.zip
Replace {XXXXXXXXXXXX} with your AWS account ID and {role} with the name of the IAM execution role for the handler.
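Once the function exists, subsequent builds can be redeployed without recreating it, e.g.:

```sh
aws lambda update-function-code \
  --function-name getHello \
  --zip-file fileb://./lambda.zip
```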
To create a single OpenAPI specification file, run:
npx @redocly/openapi-cli bundle --ext json openapi/openapi.json \
> openapi/_bundled.json
(This step isn't written yet. An API can be created in API Gateway by importing the bundled OpenAPI specification; each endpoint then proxies requests to its Lambda handler, which generates the response.)
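As a rough, untested sketch of what this step could look like: assuming the bundled spec carries x-amazon-apigateway-integration extensions pointing each operation at its Lambda handler, the API could be created by importing it with the AWS CLI:

```sh
aws apigateway import-rest-api \
  --body fileb://openapi/_bundled.json
```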
Note: Docker will need to be installed to run the mock server locally.
Start by installing the AWS Serverless Application Model (SAM) CLI:
brew tap aws/tap
brew install awscli aws-sam-cli
Build the lambda executables and run the local API Gateway server:
sam build && sam local start-api
You should now be able to hit the API mounted at http://localhost:3000/hello.
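For example:

```sh
curl http://localhost:3000/hello
```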
The API specification in openapi adheres to the OpenAPI standard and can be linted with:
npx @redocly/openapi-cli lint openapi/openapi.json