
Facial Attributes Service

This project offers a suite of services similar to Microsoft Cognitive Services, but open source and built on top of TorchServe. It provides various facial attribute predictions, such as emotions, gender, head pose, face detection, and individual typology angle (ITA).

Getting Started

Prerequisites

Ensure you have Python 3.8+ and TorchServe installed on your machine. If not, you can install TorchServe by following the official TorchServe installation instructions.

Setting Up

Downloading Model Weights

To download the necessary model weights, run the following command:

python utils/download.py

Generating Model Archives

Generate .mar archives for deployment by executing:

bash generate_mares.sh
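
The script packages each model's weights and handler into a .mar archive, which is typically done with the torch-model-archiver tool. As a rough sketch of what such a call looks like for a single model (the model name, weight file, and handler path below are placeholders, not the repository's actual layout):

import subprocess

# Package one model's weights and handler into a .mar file in model_store/
subprocess.run(
    [
        "torch-model-archiver",
        "--model-name", "gender",
        "--version", "1.0",
        "--serialized-file", "weights/gender.pth",
        "--handler", "handlers/gender_handler.py",
        "--export-path", "model_store",
        "--force",
    ],
    check=True,
)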

Starting the Service

Launch the TorchServe service with:

bash start_torchserve.sh

Verifying the Service

To verify that the service is running correctly, execute:

python processing_pipeline.py
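
In addition to running processing_pipeline.py, you can query TorchServe's built-in health and management APIs directly. A minimal sketch, assuming the default ports (8080 for inference, 8081 for management):

import requests

# Health check against the inference API (default port 8080)
print(requests.get("http://localhost:8080/ping").json())

# List the registered models via the management API (default port 8081)
print(requests.get("http://localhost:8081/models").json())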

Running the Stress Test

To run a stress test of the service:

locust -f locustfile.py --host=http://localhost:8080 --web-port=[your_port]

After running the command, open a web browser and navigate to http://localhost:[your_port] to access Locust's web interface. From there you can start the test, set the number of simulated users and the spawn rate, and monitor performance metrics in real time.
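
The repository already ships a locustfile.py; for reference, a minimal Locust user class looks roughly like the following (the model name and image path are placeholders, not the repository's actual test assets):

from locust import HttpUser, task, between

class FacialAttributesUser(HttpUser):
    # Pause 0.5-2 seconds between requests to mimic real clients
    wait_time = between(0.5, 2)

    @task
    def predict(self):
        # POST a sample image to one of the prediction endpoints
        with open("sample_face.jpg", "rb") as image_file:
            self.client.post("/predictions/gender", data=image_file.read())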

Testing

This project uses Nox for test automation and environment management. Nox allows you to run tests in isolated environments with specific dependencies, ensuring consistent and reliable test results.

Using Nox (Recommended)

  1. Install Nox if you haven't already:

    pip install nox
  2. Run all tests (including MiVOLO):

    nox
  3. Run specific test suites:

    • All tests except MiVOLO:

      nox -s tests
    • Only MiVOLO tests:

      nox -s mivolo_tests

Nox automatically manages the test environments and dependencies, ensuring that each test suite runs in its appropriate context.
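
For reference, the default tests session might look like the sketch below. This is a hypothetical example rather than the repository's actual noxfile.py, which may exclude the MiVOLO tests differently:

import nox

@nox.session(name="tests", venv_backend="venv")
def tests(session):
    # Install pytest and the project's base dependencies
    session.install("pytest")
    session.install("-r", "requirements.txt")

    # Run every handler test except the MiVOLO suite, which has its own session
    session.run(
        "pytest", "-vv", "tests/handler_tests",
        "--ignore=tests/handler_tests/mivolo_handler_test.py",
    )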

Customizing Test Environments

Nox allows you to customize test environments for different parts of your project. You can modify the noxfile.py to add or change test environments as needed. For example, the MiVOLO tests use a separate environment with specific dependencies:

@nox.session(name="mivolo_tests", venv_backend="venv")
def mivolo_tests(session):
    # Install pytest
    session.install("pytest")

    # Install the current project and its dependencies
    session.install("-r", "requirements.txt")

    # Install specific requirements for MiVOLO
    session.install("-r", "models/age/age_requirements.txt")

    # Run MiVOLO tests
    session.run("pytest", "-vv", "tests/handler_tests/mivolo_handler_test.py")

Note: Building the environments for specific handlers can take a considerable amount of time.

For more information on how to customize test environments and sessions, refer to the Nox documentation.

Using pytest directly (Alternative)

While Nox is the recommended approach, you can still run tests directly using pytest if needed:

  1. Run all tests:

    pytest -vv tests/handler_tests
  2. Check test coverage:

    pytest --cov=src --cov-report=term-missing -vv tests/handler_tests
  3. Generate an interactive HTML coverage report:

    pytest --cov=src --cov-report=term-missing --cov-report=html -vv tests/handler_tests/

Running with Docker

Building the Docker Image

To build the Docker image for the Facial Attributes Service, navigate to the root directory of the project and run the following command:

docker build . -t serve-facial-attributes -f deployment/Dockerfile

This command builds a Docker image named serve-facial-attributes using the Dockerfile located in the deployment directory.

Running the Service in a Docker Container

After building the image, you can start the service in a Docker container using the following command:

docker run --gpus all -p 8080:8080 -p 8081:8081 -p 8082:8082 serve-facial-attributes

This command runs the Docker container with GPU support enabled (make sure your Docker setup supports GPUs), mapping ports 8080, 8081, and 8082 (TorchServe's inference, management, and metrics APIs by default) from the container to the host.

API Reference

Health Check

Check if the service is running:

curl http://localhost:8080/ping

Making Predictions

To get predictions from a specific model:

curl -X POST http://localhost:8080/predictions/{model_name} -T {path_to_image}
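
The same request can be made from Python, for example with the requests library. In the sketch below, the model name and image path are placeholders:

import requests

# Send the raw image bytes to a model's prediction endpoint
with open("face.jpg", "rb") as image_file:
    response = requests.post(
        "http://localhost:8080/predictions/gender", data=image_file.read()
    )
response.raise_for_status()
print(response.json())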

Model List

The table below summarizes the currently implemented models in the service:

| Model Name | Model Type | Output Description | Additional Info |
| --- | --- | --- | --- |
| beard | Classification | Probabilities for beard presence: no_beard, beard | Binary classifier trained on CelebA to detect facial hair presence. |
| baldness | Classification | Probabilities for baldness: not_bald, bald | Binary classifier trained on CelebA for baldness detection. |
| gender | Classification | Probabilities for gender: Female, Male | Binary classifier trained on CelebA and FairFace for gender classification. |
| glasses | Classification | Probabilities for glasses presence: no_glasses, glasses | Binary classifier trained on CelebA and Glasses or No Glasses for glasses detection. |
| happiness | Classification | Probabilities for happiness: not_happy, happy | Binary classifier trained on FER2013 and AffectNet for happiness detection. |
| emotions | Classification | Probabilities for emotions: angry, disgust, fear, happy, neutral, sad, surprise | Multiclass classifier trained on FER2013 and AffectNet for emotion classification. |
| face_detection | Detection | Bounding boxes for detected faces | YOLOv8-nano model for face detection. Model available here. |
| headpose | Regression | Yaw, pitch, and roll angles of the head | 6DRepNet360 regression model for head pose estimation. |
| ita | Calculation | Individual Typology Angle (ITA) value | Calculator for the Individual Typology Angle. |
| race | Classification | Probabilities for race: Black, East Asian, Indian, Latino_Hispanic, Middle Eastern, Southeast Asian, White | Multiclass classifier trained on FairFace for race classification. |
| age | Regression | Estimated age in years | SOTA regression model built upon the MiVOLO project for age estimation. |
| dlib_face_segmentation | Segmentation | Base64-encoded segmentation mask | The "classical" approach, built upon the dlib 81-point facial landmarks predictor, which uses cropping for segmentation. |
| deeplab_face_segmentation | Segmentation | Base64-encoded segmentation mask | The advanced approach, using the DeepLabV3Plus architecture trained on the CelebAMask-HQ dataset for enhanced performance. |
| apparent_skincolor | Calculation | Luminance (lum), hue, luminance standard deviation (lum_std), hue standard deviation (hue_std), a* values, b* values | Face skin color evaluator based on the method from the paper "Beyond Skin Tone: A Multidimensional Measure of Apparent Skin Color". |
| arcface | Face Recognition | 512-dimensional face embedding vector | ArcFace face recognition model, built upon the PyTorch implementation. Can be used for face verification and identification tasks. |
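
Because the arcface model returns an embedding vector, it can be used for face verification directly. The sketch below is a hypothetical example: it assumes the handler returns the 512-dimensional embedding as a plain JSON list (the actual response format may differ), and the image paths are placeholders:

import numpy as np
import requests

def get_embedding(image_path):
    # Request a 512-dimensional embedding for one face image
    with open(image_path, "rb") as image_file:
        response = requests.post(
            "http://localhost:8080/predictions/arcface", data=image_file.read()
        )
    response.raise_for_status()
    return np.asarray(response.json(), dtype=np.float32).ravel()

embedding_a = get_embedding("person_a.jpg")
embedding_b = get_embedding("person_b.jpg")

# Cosine similarity close to 1 suggests the two images show the same person
similarity = float(
    np.dot(embedding_a, embedding_b)
    / (np.linalg.norm(embedding_a) * np.linalg.norm(embedding_b))
)
print(f"Cosine similarity: {similarity:.3f}")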