# Real-Time Sign Language Detection using LSTM
This project implements a real-time sign language detection system built with an LSTM model in Keras and keypoints extracted with MediaPipe Holistic. A deep learning model classifies American Sign Language (ASL) gestures in real time from video input. The model is trained on a dataset of ASL gestures and achieves high classification accuracy.
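MediaPipe Holistic returns pose, face, and hand landmarks for each frame, which are flattened into a single feature vector before being fed to the LSTM. Below is a minimal NumPy sketch of that flattening step, assuming the standard Holistic landmark counts (33 pose points with visibility, 468 face points, 21 points per hand) and zero-filling for parts that were not detected; the function name is illustrative:

```python
import numpy as np

# Landmark counts reported by MediaPipe Holistic:
# 33 pose (x, y, z, visibility); 468 face and 21 per hand (x, y, z).
POSE, FACE, HAND = 33, 468, 21

def extract_keypoints(pose=None, face=None, lh=None, rh=None):
    """Flatten per-frame landmarks into one vector; zero-fill missing parts.

    Each argument is an (N, C) array of landmark coordinates, or None when
    that part was not detected in the frame.
    """
    pose = pose.flatten() if pose is not None else np.zeros(POSE * 4)
    face = face.flatten() if face is not None else np.zeros(FACE * 3)
    lh = lh.flatten() if lh is not None else np.zeros(HAND * 3)
    rh = rh.flatten() if rh is not None else np.zeros(HAND * 3)
    return np.concatenate([pose, face, lh, rh])

# One frame with only a detected right hand:
# 33*4 + 468*3 + 21*3 + 21*3 = 1662 values.
frame = extract_keypoints(rh=np.random.rand(HAND, 3))
print(frame.shape)  # (1662,)
```

Zero-filling keeps the vector length constant even when a hand leaves the frame, which the LSTM requires since every timestep must have the same dimensionality.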
- Run the `main.ipynb` notebook.
- Hold your hand in front of the camera and form an ASL gesture.
- The program will display the detected gesture and its classification.
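Under the hood, real-time detection of this kind keeps a sliding window of the most recent frames' keypoint vectors and classifies the window once it is full. A minimal sketch of that windowing logic, assuming a 30-frame window and a placeholder `predict` function standing in for the trained LSTM (the gesture labels and random inputs here are illustrative):

```python
from collections import deque

import numpy as np

ACTIONS = ["hello", "thanks", "iloveyou"]  # illustrative gesture labels
WINDOW = 30  # frames per classified sequence

def predict(sequence):
    """Placeholder for model.predict on a (30, 1662) keypoint sequence."""
    return np.random.rand(len(ACTIONS))

window = deque(maxlen=WINDOW)  # oldest frame drops out automatically
detected = None
for _ in range(60):  # stand-in for frames read from the webcam
    keypoints = np.random.rand(1662)  # real code: flattened MediaPipe results
    window.append(keypoints)
    if len(window) == WINDOW:
        probs = predict(np.array(window))
        detected = ACTIONS[int(np.argmax(probs))]
print(detected)
```

Because the deque has `maxlen=WINDOW`, each new frame evicts the oldest one, so a fresh prediction can be made on every frame once the first 30 have arrived.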
- If you don't want to capture the frames for training yourself, you can use the dataset under `MP_Data`.
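Projects in this style typically organize `MP_Data` as one folder per gesture, containing numbered sequence folders of per-frame `.npy` keypoint files; that layout is an assumption here, so adjust the paths if yours differs. A sketch of turning such a layout into training arrays, demonstrated on a temporary mock dataset:

```python
import os
import tempfile

import numpy as np

SEQ_LEN, N_FEATURES = 30, 1662  # frames per sequence, keypoints per frame

def load_dataset(root, actions, n_sequences):
    """Load MP_Data-style folders: root/<action>/<sequence>/<frame>.npy."""
    X, y = [], []
    for label, action in enumerate(actions):
        for seq in range(n_sequences):
            frames = [np.load(os.path.join(root, action, str(seq), f"{f}.npy"))
                      for f in range(SEQ_LEN)]
            X.append(np.stack(frames))  # shape (SEQ_LEN, N_FEATURES)
            y.append(label)
    return np.array(X), np.array(y)

# Demo with a temporary mock dataset (two gestures, two sequences each).
actions = ["hello", "thanks"]  # illustrative labels
with tempfile.TemporaryDirectory() as root:
    for action in actions:
        for seq in range(2):
            d = os.path.join(root, action, str(seq))
            os.makedirs(d)
            for f in range(SEQ_LEN):
                np.save(os.path.join(d, f"{f}.npy"), np.random.rand(N_FEATURES))
    X, y = load_dataset(root, actions, 2)

print(X.shape, y.shape)  # (4, 30, 1662) (4,)
```

The resulting `X` of shape `(sequences, 30, 1662)` matches the `(timesteps, features)` input an LSTM layer expects, with `y` holding the integer label of each sequence.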
Special thanks to my brother for recording the GIF :)