A facial recognition microservice built with AWS Rekognition, DynamoDB, S3, IAM, CloudWatch, API Gateway, and Lambda. See this JSON dump for configuration options.
Index and store a face print:
curl --location --globoff --request PUT 'https://api.rekognition.yourdomain.com/v1/index/Image-With-a-Face.jpg' \
--header 'x-api-key: YOUR-API-KEY' \
--header 'Content-Type: text/plain' \
--data '@'
Search an image for known faces:
curl --location --globoff --request PUT 'https://api.rekognition.yourdomain.com/v1/search/' \
--header 'x-api-key: YOUR-API-KEY' \
--header 'Content-Type: text/plain' \
--data '@/Users/mcdaniel/Desktop/aws-rekognition/test-data/Different-Image-With-Same-Face.jpg'
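The two curl calls above can also be sketched as a small Python helper. This is a minimal sketch, not the project's own client: the base URL is the placeholder domain from the examples, and the base64-encoded body is an assumption inferred from the `Content-Type: text/plain` header (a common pattern for sending image bytes as a text payload).

```python
# Hypothetical helper for the /index and /search endpoints shown above.
# Assumption: the image is sent base64-encoded in the request body
# (hence Content-Type: text/plain), with the API key in x-api-key.
import base64

BASE_URL = "https://api.rekognition.yourdomain.com/v1"  # placeholder domain


def build_request(endpoint: str, image_bytes: bytes, api_key: str) -> dict:
    """Prepare the URL, headers, and body for a PUT to /index or /search."""
    return {
        "url": f"{BASE_URL}/{endpoint}",
        "headers": {"x-api-key": api_key, "Content-Type": "text/plain"},
        "body": base64.b64encode(image_bytes).decode("ascii"),
    }


# Example: prepare an index request for some in-memory image bytes
req = build_request("index/Image-With-a-Face.jpg", b"\xff\xd8fake-jpeg-bytes", "YOUR-API-KEY")
```

Sending the request is then a single call with any HTTP client, e.g. `requests.put(req["url"], headers=req["headers"], data=req["body"])`.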
The build process is fully automated with Terraform and typically completes in around 60 seconds. If you are new to Terraform, please review this Getting Started Guide first.
Configure Terraform for your AWS account. Set these three values in terraform.tfvars:
account_id = "012345678912" # your 12-digit AWS account number
aws_region = "us-east-1" # an AWS data center
aws_profile = "default" # for aws cli credentials
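Before running Terraform it can help to sanity-check the three values above. The following is a hypothetical check, not part of the project; the validation patterns (12-digit account ID, lowercase region naming) are assumptions based on standard AWS conventions.

```python
# Hypothetical sanity checks for the three terraform.tfvars values above.
import re


def validate_tfvars(account_id: str, aws_region: str, aws_profile: str) -> list:
    """Return a list of problems found; an empty list means the values look sane."""
    problems = []
    if not re.fullmatch(r"\d{12}", account_id):
        problems.append("account_id must be exactly 12 digits")
    if not re.fullmatch(r"[a-z]{2}(-[a-z]+)+-\d", aws_region):
        problems.append("aws_region does not look like an AWS region name")
    if not aws_profile:
        problems.append("aws_profile must name a profile in ~/.aws/credentials")
    return problems


print(validate_tfvars("012345678912", "us-east-1", "default"))  # → []
```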
Build and configure AWS cloud infrastructure:
cd terraform
terraform init
terraform plan
terraform apply
- Highly secure. This project follows best practices for handling AWS credentials. The API runs over HTTPS using AWS-managed SSL/TLS certificates and requires an API key. User data is persisted to a non-public AWS S3 bucket. The API fully implements CORS (Cross-Origin Resource Sharing). Backend services run privately inside an AWS VPC, with no public access.
- Cost effective. The running cost of this API typically remains within the AWS Free Tier for most, if not all, of the services it uses.
- CloudWatch logs for Lambda as well as API Gateway.
- AWS serverless implementation using AWS API Gateway, AWS DynamoDB, and AWS Lambda.
- Metadata endpoint /info that returns a JSON dict of the entire platform configuration.
- Robust, performant, and highly scalable.
- AWS API Gateway usage policy and managed API key.
- Preconfigured Postman files for testing.
- git: pre-installed on Linux and macOS.
- make: pre-installed on Linux and macOS.
- zip: pre-installed on Linux and macOS.
- AWS account
- AWS Command Line Interface
- Terraform: if you're new to Terraform, see Getting Started With AWS and Terraform.
- Python 3.11: used to create the virtual environment for building the AWS Lambda Layer, and locally by pre-commit linters and code formatters.
- NodeJS: used with NPM for local ReactJS developer environment, and for configuring/testing Semantic Release.
Please see this detailed technical summary of the architecture strategy for this solution.
To get community support, go to the official Issues Page for this project.
This project demonstrates a wide variety of best practices for managing mission-critical cloud-based microservices in a team environment, most notably its adherence to the 12-Factor Methodology. Please see these Code Management Best Practices for additional details.
We want to make this project more accessible to students and learners as an instructional tool, without adding undue code-review workload for anyone with merge authority. To this end we've added several pre-commit code linting and style enforcement tools, as well as automated procedures for dependency version maintenance, pull request evaluation, and semantic releases.
We welcome contributions! There are many ways to get involved, regardless of your background. In addition to pull requests, this project would benefit from contributors focused on documentation, how-to video content, testing, community engagement, and stewardship to help ensure that we comply with evolving standards for the ethical use of AI.
For developers, please see:
- the Developer Setup Guide
- and these commit comment guidelines 😬😬😬 for managing CI rules for automated semantic releases.
You can also contact Lawrence McDaniel directly.