Action Recognition using MediaPipe and OpenCV

Overview

This project demonstrates real-time action recognition with MediaPipe and OpenCV. MediaPipe's Hand and Pose models detect hand and body landmarks from the camera feed, and a machine learning model classifies them into one of five predefined actions: "three", "sign", "heart", "thanks", and "hello".
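
The notebook's exact pipeline is not reproduced here, but the landmark-extraction step commonly looks like the minimal sketch below. It assumes MediaPipe's Holistic solution (which bundles the pose and hand models) and a flattened keypoint vector as the classifier input; the helper name and the dummy frame are illustrative, not taken from the notebook.

    import cv2
    import mediapipe as mp
    import numpy as np

    mp_holistic = mp.solutions.holistic  # bundles the pose and hand landmark models

    def extract_keypoints(results):
        # Flatten pose and hand landmarks into one feature vector; zero-fill missing
        # detections so the length stays constant (33*4 + 21*3 + 21*3 = 258 values).
        pose = (np.array([[p.x, p.y, p.z, p.visibility] for p in results.pose_landmarks.landmark]).flatten()
                if results.pose_landmarks else np.zeros(33 * 4))
        left = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
                if results.left_hand_landmarks else np.zeros(21 * 3))
        right = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
                 if results.right_hand_landmarks else np.zeros(21 * 3))
        return np.concatenate([pose, left, right])

    with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
        frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a BGR webcam frame
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # MediaPipe expects RGB
        keypoints = extract_keypoints(results)            # feature vector fed to the classifier
        print(keypoints.shape)                            # (258,)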

Prerequisites

  • Python 3.x
  • OpenCV
  • MediaPipe

Installation

  1. Clone the repository:

    git clone <repository_url>
    
  2. Navigate to the project directory:

    cd <project_directory>

  3. Install dependencies:

    pip install -r requirements.txt
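
The exact contents of requirements.txt are not shown here; judging from the prerequisites above, a minimal version would look roughly like the lines below (the classifier in the notebook may pull in an additional framework such as TensorFlow or scikit-learn):

    opencv-python
    mediapipe
    numpy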

Usage

  1. Launch the action recognition notebook and run all cells:

    jupyter notebook action_recognition.ipynb

  2. Follow the instructions to perform hand and body gestures in front of the camera.

  3. The notebook classifies the detected gestures into the predefined actions: "three", "sign", "heart", "thanks", and "hello".
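
For orientation, a real-time loop of this kind could look like the sketch below. The webcam capture and landmark drawing use standard OpenCV and MediaPipe calls; the prediction lines are left as comments because the trained model, its file name, and its input shape are assumptions rather than details from the notebook (the keypoint helper is the one from the Overview sketch).

    import cv2
    import mediapipe as mp
    import numpy as np

    ACTIONS = ["three", "sign", "heart", "thanks", "hello"]   # labels listed in this README

    mp_holistic = mp.solutions.holistic
    mp_drawing = mp.solutions.drawing_utils
    cap = cv2.VideoCapture(0)                                 # default webcam

    with mp_holistic.Holistic(min_detection_confidence=0.5, min_tracking_confidence=0.5) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            mp_drawing.draw_landmarks(frame, results.pose_landmarks, mp_holistic.POSE_CONNECTIONS)

            # Prediction path (hypothetical; depends on the notebook's trained model):
            # keypoints = extract_keypoints(results)          # helper from the Overview sketch
            # probs = model.predict(keypoints[np.newaxis, :])[0]
            # label = ACTIONS[int(np.argmax(probs))]
            # cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)

            cv2.imshow("Action Recognition", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):             # press q to quit
                break

    cap.release()
    cv2.destroyAllWindows()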

Predicted Actions

  • three: Gesture representing the number three.
  • sign: Hand gesture representing a sign.
  • heart: Hand gesture representing a heart shape.
  • thanks: Gesture representing the word "thanks".
  • hello: Gesture representing the word "hello".

Contributing

Contributions are welcome! Please submit pull requests or open issues for any improvements or bug fixes.

License

This project is licensed under the MIT License - see the LICENSE file for details.