This project contains all necessary components and services for the Video Action Recognizer application.
Implemented in Python, this component performs video analysis using TensorFlow and the Movinet kinetics-600 model. It operates as a serverless Fargate task within AWS ECS.
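As a rough illustration of what the Analysis Core does, the sketch below shows one way a MoViNet kinetics-600 classifier can be invoked through TensorFlow Hub. The model handle, serving signature, and output key are assumptions based on the public Hub release of MoViNet, not code taken from this project.

```python
# Hypothetical sketch of MoViNet inference; not this project's actual code.

def top_k_labels(probs, labels, k=3):
    """Pure helper: pair per-class probabilities with labels, highest first."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

def classify_clip(frames):
    """Classify a float32 video tensor of shape [1, T, H, W, 3].

    tensorflow and tensorflow_hub are imported lazily so the pure helper
    above stays usable without them installed.
    """
    import tensorflow as tf
    import tensorflow_hub as hub

    model = hub.load(
        "https://tfhub.dev/tensorflow/movinet/a0/base/kinetics-600/classification/3"
    )
    # The signature and output key can vary between Hub model versions.
    outputs = model.signatures["serving_default"](image=tf.constant(frames))
    logits = next(iter(outputs.values()))
    return tf.nn.softmax(logits, axis=-1).numpy()[0]
```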
A Python AWS Lambda function that responds to S3 'object put' events by initiating the Analysis Core ECS task to process the uploaded video file.
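The flow of that Lambda function can be sketched as follows. The cluster and task-definition names here are placeholders, not values from this repository, and the actual boto3 dispatch is shown only as a comment.

```python
# Hypothetical sketch of the upload-listener Lambda; names are placeholders.
import json
import urllib.parse

def extract_s3_object(event):
    """Pull bucket and key out of a standard S3 'ObjectCreated' event record.

    Object keys arrive URL-encoded in S3 event notifications, so decode them.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

def build_run_task_kwargs(bucket, key):
    """Assemble the arguments an ecs.run_task call would need to start the
    Analysis Core Fargate task for the uploaded object."""
    return {
        "cluster": "video-analysis-cluster",   # placeholder name
        "taskDefinition": "analysis-core",     # placeholder name
        "launchType": "FARGATE",
        "overrides": {
            "containerOverrides": [{
                "name": "analysis-core",
                "environment": [
                    {"name": "INPUT_BUCKET", "value": bucket},
                    {"name": "INPUT_KEY", "value": key},
                ],
            }]
        },
    }

def handler(event, context):
    bucket, key = extract_s3_object(event)
    kwargs = build_run_task_kwargs(bucket, key)
    # In the real function, boto3 would dispatch the task, e.g.:
    # import boto3
    # boto3.client("ecs").run_task(networkConfiguration=..., **kwargs)
    return {"statusCode": 200, "body": json.dumps({"bucket": bucket, "key": key})}
```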
In development, this component will provide RESTful APIs, facilitating server-side interactions and integrations with AWS services.
The user interface is built with TypeScript and React.js, allowing for video or GIF file uploads and presenting analysis results. (Under Development)
Infrastructure as Code (IaC) managed through Terraform scripts automates the setup of the required AWS infrastructure.
Ensure the AWS CLI is installed and configured with an access key pair before beginning the deployment process.
Navigate to the infrastructure/setup directory and create a terraform.tfvars file:
aws_region = "<AWS_REGION>"
terraform_state_bucket = "<TERRAFORM_STATE_BUCKET_NAME>"
lambda_bucket = "<LAMBDA_BUCKET_NAME>"
Initialize Terraform:
terraform init
Deploy the resources:
terraform plan -out setup.tfplan
terraform apply "setup.tfplan"
Navigate to the upload-listener directory:
cd upload-listener
Package and deploy the Lambda function:
rm -rf ./package && rm -rf ./build
mkdir -p ./package && mkdir -p ./build
cp listener_lambda.py ./package/
python -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt -t ./package
cd ./package
zip -r9 ../build/upload_listener.zip .
cd ..
rm -rf ./package
deactivate
aws s3 cp build/upload_listener.zip \
s3://<LAMBDA_BUCKET_NAME>/upload_listener/latest/function.zip
shasum -a 256 build/upload_listener.zip | awk '{print $1}' | xxd -r -p | base64
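The shell pipeline above can also be reproduced in Python, which makes it clear what is being computed: the base64 encoding of the raw (binary) SHA-256 digest of the zip bundle.

```python
# Python equivalent of: shasum -a 256 FILE | awk '{print $1}' | xxd -r -p | base64
import base64
import hashlib

def bundle_sha256_b64(path):
    """Return the base64-encoded binary SHA-256 digest of a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return base64.b64encode(digest).decode("ascii")
```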
Record the resulting SHA sum; you will use it as <LAMBDA_BUNDLE_SHA> in the Main Infrastructure section.
Navigate to the infrastructure directory and create a backend configuration file backend_config.hcl:
bucket = "<TERRAFORM_STATE_BUCKET_NAME>"
region = "<AWS_REGION>"
Create a terraform.tfvars file:
aws_region = "<AWS_REGION>"
input_bucket = "<INPUT_BUCKET_NAME>"
output_bucket = "<OUTPUT_BUCKET_NAME>"
lambda_bucket = "<LAMBDA_BUCKET_NAME>"
upload_listener_lambda_bundle_sha = "<LAMBDA_BUNDLE_SHA>"
cognito_domain_prefix = "<AWS_COGNITO_DOMAIN_PREFIX>"
Initialize Terraform with the S3 backend:
terraform init -backend-config="backend_config.hcl"
Deploy the main infrastructure:
terraform plan -out main.tfplan
terraform apply "main.tfplan"
To view the state of deployed resources:
terraform state list
Build and push the Docker image:
aws ecr get-login-password --region <AWS_REGION> | \
docker login --username AWS --password-stdin \
<ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
docker build \
-t <ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/video-action-recognizer:<TAG> \
-t <ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/video-action-recognizer:latest \
.
docker push <ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/video-action-recognizer:<TAG>
docker push <ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/video-action-recognizer:latest
To update the Lambda function:
- Re-run the steps above to package and upload the listener Lambda function.
- Replace the upload_listener_lambda_bundle_sha value in terraform.tfvars with the newly obtained SHA sum.
- Apply the changes using Terraform:
terraform plan -out main.tfplan
terraform apply "main.tfplan"