boa322


Deploy ML-powered music genre classifier using AWS Lambda, Amazon EFS and Amazon S3

Repository for deploying multiple machine learning models for inference on AWS Lambda and Amazon EFS

Introduction

In this repo, you will find all the code needed to deploy your application for Machine Learning Inference using AWS Lambda and Amazon EFS.

Application Workflow

Here is the architectural workflow of our application:

  • Create a serverless application that triggers a Lambda function whenever a new model is uploaded to your S3 bucket. The function copies that file from your S3 bucket to the EFS file system.

  • Create another Lambda function that loads the model from Amazon EFS and performs a prediction on an audio file.

  • Build and deploy both applications using the AWS Serverless Application Model (AWS SAM).
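As a sketch of the first step, the S3-to-EFS copy function could look like the following. This is illustrative, not the repo's actual code: the `/mnt/ml` mount path, the `MODEL_DIR` environment variable, and the `s3_objects` helper are all assumptions.

```python
import os

# Hypothetical EFS mount path -- must match the access point's
# LocalMountPath configured in the SAM template
MODEL_DIR = os.environ.get("MODEL_DIR", "/mnt/ml")

def s3_objects(event):
    """Extract (bucket, key) pairs from an S3 put-notification event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")
    copied = []
    for bucket, key in s3_objects(event):
        # Download the newly uploaded model straight onto the EFS mount
        dest = os.path.join(MODEL_DIR, os.path.basename(key))
        s3.download_file(bucket, key, dest)
        copied.append(dest)
    return {"copied": copied}
```

Because the function runs inside the VPC, it needs either a NAT gateway or an S3 VPC endpoint to reach S3.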

Architecture

To use the Amazon EFS file system from Lambda, you need the following:

  • An Amazon Virtual Private Cloud (Amazon VPC)
  • An Amazon EFS file system created within that VPC with an access point as an application entry point for your Lambda function.
  • A Lambda function (in the same VPC and private subnets) referencing the access point.
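A minimal SAM template fragment wiring these pieces together might look like this; the resource names, runtime, and `/mnt/ml` mount path are illustrative (the actual template ships with the repo):

```yaml
Resources:
  MLInferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.9
      VpcConfig:                         # same VPC/private subnets as the file system
        SecurityGroupIds:
          - !Ref LambdaSecurityGroup
        SubnetIds:
          - !Ref PrivateSubnet
      FileSystemConfigs:
        - Arn: !GetAtt AccessPoint.Arn   # the EFS access point
          LocalMountPath: /mnt/ml        # Lambda requires a path under /mnt/
```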

The following diagram illustrates the solution architecture:

Architecture Diagram

Create an Amazon EFS file system, access point, and Lambda function

Now, we are going to use a single SAM deployment, which will create the following two serverless applications:

  • app1 (s3-efs): A serverless application that transfers the uploaded ML models from your S3 bucket to your EFS file system.
  • app2 (ml-inference): A serverless application that serves ML inference requests from clients.
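A sketch of what app2's handler could look like, under loose assumptions: the model is unpickled from `/mnt/ml/models.p`, and the unpickled object exposes a `predict()` that accepts a song file path (the repo's real feature-extraction code will differ):

```python
import json
import os
import pickle

# Hypothetical path -- adjust to your EFS access point and model file name
MODEL_PATH = os.environ.get("MODEL_PATH", "/mnt/ml/models.p")

_model = None  # cached across warm invocations so EFS is read only once

def get_model(path=MODEL_PATH):
    """Load the pickled classifier from EFS on first use and reuse it."""
    global _model
    if _model is None:
        with open(path, "rb") as f:
            _model = pickle.load(f)
    return _model

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    song = body.get("song_name")
    if not song:
        return {"statusCode": 400,
                "body": json.dumps({"error": "song_name is required"})}
    model = get_model()
    # Assumption: the pickled object wraps feature extraction + classification
    genre = model.predict(os.path.join("/mnt/ml/songs", song))
    return {"statusCode": 200, "body": json.dumps({"genre": str(genre)})}
```

Caching the model in a module-level variable keeps warm invocations fast, since only the first (cold) invocation pays the EFS read and unpickling cost.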

Architecture Diagram

Steps to deploy

  1. Create a project directory:
$ mkdir my-ml-project
$ cd my-ml-project
  2. Create a new serverless application in AWS SAM using the following command:
$ sam init

Choose Custom Template Location (Choice: 2) as the template source, and provide the following GitHub template location:

https://github.com/debnsuma/boa322.git

  3. Build the AWS SAM application:
$ sam build --use-container
  4. Deploy the application:
$ sam deploy --guided
  5. Provide a unique Stack Name and SrcBucket name.
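After a guided deploy, SAM records your answers in a `samconfig.toml` so later deploys can skip the prompts. A typical file looks like the following; the stack name, region, and bucket value here are placeholders:

```toml
version = 0.1
[default.deploy.parameters]
stack_name = "my-ml-stack"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
parameter_overrides = "SrcBucket=\"my-ml-models-bucket\""
```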

Architecture Diagram

  6. Once the application is deployed, note the API Gateway endpoint (you will need it at Step 8 for inference).

  7. Upload the ML model:

$ aws s3 cp models.p s3://<THE BUCKET NAME YOU PROVIDED WHILE DEPLOYING THE SAM APPLICATION>
  8. Perform ML inference: use Postman or the API Gateway console, and send the following JSON body (replace the song name with one of your choice):
{
    "song_name": "blues.00099.6.wav"
}
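If you prefer to call the endpoint from a script instead of Postman, a minimal stdlib-only client could look like this; the URL is a placeholder for the endpoint printed in the SAM deploy outputs, and the `/predict` route is an assumption:

```python
import json
import urllib.request

# Hypothetical URL -- replace with the endpoint from the SAM deploy outputs
API_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/Prod/predict"

def build_request(song_name, url=API_URL):
    """Build the POST request carrying the song name as a JSON body."""
    body = json.dumps({"song_name": song_name}).encode("utf-8")
    return urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def predict(song_name):
    """Call the inference API and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(song_name)) as resp:
        return json.loads(resp.read())
```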

Architecture Diagram