This project demonstrates action recognition using MediaPipe and OpenCV. It utilizes MediaPipe's Hand and Pose models to detect hand and body gestures, which are then classified into predefined actions using a machine learning model. The predicted actions include "three", "sign", "heart", "thanks", and "hello".
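The pipeline described above detects landmarks with MediaPipe and feeds them to a classifier. A common intermediate step is flattening the detected landmarks into a fixed-length feature vector; the sketch below illustrates that step. This is a minimal illustration, not the project's actual code — the function name, vector layout, and zero-filling of missed detections are assumptions.

```python
import numpy as np

NUM_POSE_LANDMARKS = 33   # MediaPipe Pose returns 33 body landmarks
NUM_HAND_LANDMARKS = 21   # MediaPipe Hands returns 21 landmarks per hand

def extract_keypoints(pose_landmarks, left_hand_landmarks, right_hand_landmarks):
    """Flatten MediaPipe landmark lists into a single feature vector.

    Each landmark contributes an (x, y, z) triple; missing detections
    are zero-filled so the vector length stays constant for the classifier.
    """
    def flatten(landmarks, count):
        if landmarks is None:
            return np.zeros(count * 3)
        return np.array([[lm.x, lm.y, lm.z] for lm in landmarks]).flatten()

    pose = flatten(pose_landmarks, NUM_POSE_LANDMARKS)
    lh = flatten(left_hand_landmarks, NUM_HAND_LANDMARKS)
    rh = flatten(right_hand_landmarks, NUM_HAND_LANDMARKS)
    return np.concatenate([pose, lh, rh])
```

With this layout the vector is always 33·3 + 2·21·3 = 225 values per frame, regardless of which body parts were detected.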
- Python 3.x
- OpenCV
- MediaPipe
- Clone the repository:

  ```bash
  git clone <repository_url>
  ```
- Navigate to the project directory:

  ```bash
  cd <project_directory>
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Open and run the action recognition notebook (an `.ipynb` file cannot be run with `python` directly):

  ```bash
  jupyter notebook action_recognition.ipynb
  ```
- Follow the on-screen instructions and perform hand and body gestures in front of the camera.
- The script classifies the detected gestures into the predefined actions: "three", "sign", "heart", "thanks", and "hello".
- three: Gesture representing the number three.
- sign: Hand gesture representing a sign.
- heart: Hand gesture representing a heart shape.
- thanks: Gesture representing the word "thanks".
- hello: Gesture representing the word "hello".
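For context, turning a classifier's output into one of the five labels above typically looks like the sketch below. The label ordering and the confidence threshold are assumptions for illustration, not taken from the project.

```python
import numpy as np

# Predefined action labels; this ordering is an assumption for illustration.
ACTIONS = ["three", "sign", "heart", "thanks", "hello"]

def decode_prediction(probs, threshold=0.7):
    """Map a per-class probability vector to an action label.

    Returns None when the most likely class is below the confidence
    threshold, so uncertain frames report no action.
    """
    probs = np.asarray(probs)
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return None
    return ACTIONS[idx]
```

A threshold like this is a common way to suppress spurious labels on frames where no clear gesture is being made.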
Contributions are welcome! Please submit pull requests or open issues for any improvements or bug fixes.
This project is licensed under the MIT License - see the LICENSE file for details.