- This project showcases fine-tuning a model and performing gesture recognition of 21 different gestures using MediaPipe from Google.
- The notebook shows how I trained the baseline model, which achieved 83% accuracy on the test set, and two fine-tuned models, which achieved 88%; a minimal fine-tuning sketch follows the dataset sample below.
- The file gesture_recognition.py contains the code to run the models on a live webcam feed; scroll down to the usage section.
- The file audio_controls.py contains the code to control the computer's audio functions (one possible approach is sketched below).
- The file hands_landmark.py is an experimental snippet that uses hand landmarks from MediaPipe to recognize gestures and an if statement to execute a print function whenever the gesture is detected; it does not use a pretrained model (this approach is also sketched below).
- The dataset is a combination of two datasets, and you can get it here.
A sample of the data in the dataset
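For reference, fine-tuning a gesture recognizer of this kind is typically done with MediaPipe Model Maker. The sketch below shows the general pattern; the dataset path, split ratios, and hyperparameter values are illustrative assumptions, not the notebook's exact settings.

```python
from mediapipe_model_maker import gesture_recognizer

# The loader expects one subfolder per gesture label (plus a "none" class).
data = gesture_recognizer.Dataset.from_folder(
    dirname="dataset",  # assumed path to the combined dataset
    hparams=gesture_recognizer.HandDataPreprocessingParams(),
)
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# Fine-tuning knobs (illustrative values): epochs, dropout, export directory.
hparams = gesture_recognizer.HParams(epochs=30, export_dir="exported_model")
options = gesture_recognizer.GestureRecognizerOptions(
    model_options=gesture_recognizer.ModelOptions(dropout_rate=0.2),
    hparams=hparams,
)
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options,
)

loss, accuracy = model.evaluate(test_data, batch_size=1)
model.export_model()  # writes a .task file to export_dir
```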
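The exact library behind audio_controls.py isn't shown here; on Windows, a common choice for system-volume control is pycaw. A minimal sketch, assuming pycaw is installed (`pip install pycaw`):

```python
from ctypes import POINTER, cast

from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

def get_master_volume():
    """Return the Windows master-volume endpoint (pycaw/COM boilerplate)."""
    speakers = AudioUtilities.GetSpeakers()
    interface = speakers.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
    return cast(interface, POINTER(IAudioEndpointVolume))

volume = get_master_volume()
volume.SetMasterVolumeLevelScalar(0.5, None)  # set volume to 50%
volume.SetMute(1, None)                       # mute; SetMute(0, None) unmutes
```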
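The landmark-heuristic idea in hands_landmark.py can be illustrated along these lines; the specific rule here (a thumbs-up check) is a toy assumption, not the repo's actual condition:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Grab a single webcam frame for a self-contained example.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if not ok:
    raise RuntimeError("Could not read from webcam")

with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Toy rule: thumb tip above its joint (smaller y = higher in image).
        if lm[mp_hands.HandLandmark.THUMB_TIP].y < lm[mp_hands.HandLandmark.THUMB_IP].y:
            print("Thumbs up detected")
```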
- Clone the repository:

```bash
git clone https://github.com/KevKibe/Gesture-Recognition-using-Mediapipe.git
```
- Install dependencies:

```bash
pip install -r requirements.txt
```
- Run the application with the following command in the terminal:

```bash
py gesture_recognition.py
```

- Test it out with different gestures.
- To close the application, press the ESC key. (A sketch of this kind of recognition loop follows below.)
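For reference, a live recognition loop like this typically follows the MediaPipe Tasks pattern below; the model path and the on-screen label drawing are assumptions rather than the exact contents of gesture_recognition.py:

```python
import cv2
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

# Load a trained gesture model (the path is an assumed example).
options = vision.GestureRecognizerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="gesture_recognizer.task")
)
recognizer = vision.GestureRecognizer.create_from_options(options)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects an RGB mp.Image; OpenCV frames are BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = recognizer.recognize(mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))
    if result.gestures:
        top = result.gestures[0][0]  # best gesture for the first detected hand
        cv2.putText(frame, f"{top.category_name} ({top.score:.2f})",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Gesture Recognition", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # 27 == ESC, which closes the app
        break

cap.release()
cv2.destroyAllWindows()
```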
⚡ I'm currently open for roles in Data Science, Machine Learning, NLP and Computer Vision.