| Key | Value |
| --- | --- |
| Environment | LocalStack |
| Services | S3, SageMaker, Lambda |
| Integrations | AWS SDK |
| Categories | Serverless, S3 website, Lambda function URLs, SageMaker, Machine Learning, JavaScript, Python |
| Level | Intermediate |
This is a sample application that demonstrates how to use SageMaker on LocalStack. A simple web frontend allows users to draw a digit and submit it to a locally running SageMaker endpoint. The endpoint returns a prediction of the digit, which is then displayed in the web frontend. Request handling is performed by a Lambda function, accessible via a function URL, that uses the SageMaker SDK to invoke the endpoint.
Here's a short summary of the AWS service features we use:
- S3 website
- Lambda function URLs
- SageMaker endpoint
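To make the request path concrete, here's a minimal sketch of what a Lambda handler behind the function URL could look like. This is not the actual handler shipped with this sample; the endpoint name, the payload shape, and the response format are assumptions for illustration:

```python
import json

import boto3

# Inside the Lambda container, boto3 is pre-configured; on LocalStack the
# SageMaker runtime API is served by the same local endpoint.
runtime = boto3.client("sagemaker-runtime")


def handler(event, context):
    # The frontend POSTs the drawn digit to the function URL as a JSON body.
    body = json.loads(event["body"])

    # Forward the image to the SageMaker endpoint ("mnist-endpoint" is a
    # hypothetical name) and read back the prediction.
    response = runtime.invoke_endpoint(
        EndpointName="mnist-endpoint",
        ContentType="application/json",
        Body=json.dumps({"inputs": body["image"]}),
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction}),
    }
```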
Here's the web application in action:

*(Screen recording: a digit is drawn on the canvas and the predicted digit is displayed.)*
Create a virtualenv and install all the development dependencies there:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
If you'd like to perform training locally, you'll need to install the `ml` dev dependencies as well:

```bash
pip install -r ml/requirements.txt
```
You'll also need Node.js and npm installed to build the web application. Please install them according to the official guidelines, e.g. via nvm: https://github.com/nvm-sh/nvm
As our inference container, we use the PyTorch inference container from AWS ECR:

```bash
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.eu-central-1.amazonaws.com
docker pull 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-inference:1.10.2-cpu-py38-ubuntu20.04-sagemaker
```
Start LocalStack Pro with your Auth Token:
```bash
PERSISTENCE=1 LOCALSTACK_AUTH_TOKEN=... localstack start
```
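Before deploying, you can optionally check that the services this sample needs are available. Here's a minimal sketch using only the standard library and LocalStack's health endpoint:

```python
import json
from urllib.request import urlopen

# LocalStack reports the state of each service on its health endpoint.
with urlopen("http://localhost:4566/_localstack/health") as resp:
    health = json.load(resp)

# Print the status of the services used by this sample.
print({svc: health["services"].get(svc) for svc in ("s3", "lambda", "sagemaker")})
```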
First, we install the dependencies for the web application in the `web` directory:

```bash
(cd web; npm install)
```
You can then create the AWS infrastructure on LocalStack by running the `deploy/deploy_app.py` script (make sure to have the virtual environment activated):

```bash
source .venv/bin/activate
python deploy/deploy_app.py
```
This script first uploads the model to an S3 bucket, then creates the SageMaker endpoint from it. It also creates the Lambda function that is used to invoke the endpoint. Finally, it builds the web application and creates an S3 website to host it.
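For reference, the heavy lifting reduces to a handful of AWS SDK calls. The following is a simplified sketch rather than the actual contents of `deploy/deploy_app.py`; the bucket, model, endpoint, and role names, the model archive path, and the instance sizing are all assumptions:

```python
import boto3

# Point the SDK at LocalStack (dummy credentials, assumed region).
session = boto3.Session(
    aws_access_key_id="test", aws_secret_access_key="test", region_name="us-east-1"
)
endpoint_url = "http://localhost:4566"
s3 = session.client("s3", endpoint_url=endpoint_url)
sagemaker = session.client("sagemaker", endpoint_url=endpoint_url)

# 1. Upload the trained model archive to a bucket (hypothetical path/names).
s3.create_bucket(Bucket="models")
s3.upload_file("ml/model.tar.gz", "models", "model.tar.gz")

# 2. Register the model, backed by the PyTorch inference image pulled above.
sagemaker.create_model(
    ModelName="mnist-model",
    PrimaryContainer={
        "Image": "763104351884.dkr.ecr.eu-central-1.amazonaws.com/"
        "pytorch-inference:1.10.2-cpu-py38-ubuntu20.04-sagemaker",
        "ModelDataUrl": "s3://models/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::000000000000:role/sagemaker-role",  # dummy role
)

# 3. Create an endpoint configuration and the endpoint itself.
sagemaker.create_endpoint_config(
    EndpointConfigName="mnist-config",
    ProductionVariants=[
        {
            "VariantName": "primary",
            "ModelName": "mnist-model",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
        }
    ],
)
sagemaker.create_endpoint(
    EndpointName="mnist-endpoint", EndpointConfigName="mnist-config"
)
```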
Once deployed, visit http://mnist-website.s3-website.localhost.localstack.cloud:4566
Draw a digit on the canvas and click the `Predict` button. After a few moments, the resulting prediction is displayed in the box to the right.
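You can also exercise the backend without the browser by POSTing to the Lambda function URL directly. This is a rough sketch; the function name and the payload shape are assumptions:

```python
import json
from urllib.request import Request, urlopen

import boto3

lam = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Look up the function URL created during deployment
# ("mnist-handler" is a hypothetical function name).
url = lam.get_function_url_config(FunctionName="mnist-handler")["FunctionUrl"]

# Assumed payload shape: a 28x28 grayscale image as nested lists.
payload = json.dumps({"image": [[0.0] * 28] * 28}).encode()
req = Request(url, data=payload, headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    print(json.load(resp))
```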
To switch to a serverless SageMaker endpoint, you can also execute the deployment script with the additional `-s` or `--serverless` flag:

```bash
python deploy/deploy_app.py --serverless
```
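Under the hood, the serverless variant only changes the endpoint configuration: the production variant carries a `ServerlessConfig` instead of instance settings. A minimal sketch (names and sizing are assumptions):

```python
import boto3

sagemaker = boto3.client(
    "sagemaker",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Memory size and max concurrency replace instance type and count.
sagemaker.create_endpoint_config(
    EndpointConfigName="mnist-config-serverless",  # hypothetical name
    ProductionVariants=[
        {
            "VariantName": "primary",
            "ModelName": "mnist-model",  # hypothetical name
            "ServerlessConfig": {"MemorySizeInMB": 2048, "MaxConcurrency": 1},
        }
    ],
)
```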
The code of this sample application is published under the Apache 2.0 license (see `LICENSE`).
We appreciate your interest in contributing to our project and are always looking for new ways to improve the developer experience. We welcome feedback, bug reports, and even feature ideas from the community. Please refer to the contributing file for more details on how to get started.