This project demonstrates the use of YOLOv8 for real-time sign language detection. It leverages a dataset from Roboflow Universe to train the model and achieve accurate detection of various sign language gestures.
- Real-time Detection: The model processes video frames efficiently, enabling real-time detection of sign language gestures.
- Accurate Recognition: Trained on a diverse dataset, the model effectively recognizes a range of sign language signs.
- Compatibility with YOLOv8: Built using YOLOv8, a state-of-the-art object detection model, for optimal performance.
- Roboflow Integration: Leverages Roboflow for dataset management and conversion, streamlining the development process.
- Source: the sign_recognition Computer Vision Project on Roboflow Universe
- Format: YOLOv8 compatible format
- YOLOv8: Employs the YOLOv8 model architecture for object detection.
- Dataset Download: Obtained the dataset from Roboflow Universe in YOLOv8 format.
- Model Training: Trained the YOLOv8 model on the converted dataset.
- Python
- PyTorch
- YOLOv8
- Roboflow (for dataset management and download)
- Clone this repository.
- Download the dataset from Roboflow Universe.
- Download the pre-trained model weights.
- Run the notebook `instance_segmentation_sign_recognition.ipynb`.
- Results are stored in the `result_dir` directory.
- Sample predictions are stored in the `prediction_samples` directory.
- Explore model optimization techniques to enhance speed and accuracy.
- Expand the dataset to include a broader range of sign language gestures.
- Integrate the model into applications for real-world sign language interpretation.
I welcome contributions to this project!
This project is licensed under the MIT License. See the LICENSE file for details.
For any queries or feedback, please reach out to Mukund Kumar (mukundwh8@gmail.com).