This project uses a custom-trained YOLOv11 model with 40 classes to detect hand signs for sign language recognition, using OpenCV for real-time inference on a webcam feed.

Requirements:
- Python 3.x
- OpenCV
- PyTorch
- Ultralytics YOLO (for ONNX model support)
Installation:

- Clone the repository:

  ```bash
  git clone https://github.com/alihassanml/Yolo11-sign-language-detection.git
  cd Yolo11-sign-language-detection
  ```
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Ensure that `best.onnx` (the trained model) is in the project directory.
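Before launching the webcam loop, a quick pre-flight check can fail fast with a clear message when the weights are missing (a minimal sketch; `find_model` is a hypothetical helper, not part of the repository):

```python
from pathlib import Path

def find_model(directory: str = '.', name: str = 'best.onnx') -> Path:
    """Return the path to the ONNX weights, raising early if absent."""
    path = Path(directory) / name
    if not path.is_file():
        # Fail before OpenCV grabs the camera, with an actionable message
        raise FileNotFoundError(
            f"{name} not found in {Path(directory).resolve()}; "
            "place the trained model in the project directory."
        )
    return path
```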
Use the following code to run real-time sign language detection from your webcam:
```python
from ultralytics import YOLO
import cv2

# Load the trained model exported to ONNX
model = YOLO('best.onnx')

# Open the default webcam
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        print("Failed to grab frame.")
        break

    # Run inference on the current frame
    results = model(frame)
    result = results[0]

    # Draw the predicted boxes and labels onto the frame
    annotated_frame = result.plot()
    cv2.imshow('YOLO Inference', annotated_frame)

    # Press Esc (key code 27) to quit
    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()
```
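Per-frame predictions tend to flicker while a hand is moving, so it can help to report a sign only after it persists across several consecutive frames. Below is a minimal sketch of that idea; `SignStabilizer` is a hypothetical helper, not part of the repository, and you would feed it the top class label from each frame's result:

```python
class SignStabilizer:
    """Report a sign only after it appears in `hold` consecutive frames,
    smoothing out per-frame prediction flicker. (Hypothetical helper.)"""

    def __init__(self, hold: int = 5):
        self.hold = hold
        self.last = None   # label seen on the previous frame
        self.count = 0     # how many consecutive frames it has held

    def update(self, label):
        """Feed the top detection for the current frame (or None).
        Returns the label once it has held for `hold` frames, else None."""
        if label is None or label != self.last:
            # The streak broke: start counting the new label (if any)
            self.last = label
            self.count = 1 if label is not None else 0
            return None
        self.count += 1
        # Fire exactly once, when the streak first reaches the threshold
        return label if self.count == self.hold else None
```

Inside the loop, `stabilizer.update(label)` would be called once per frame with the highest-confidence class name, or `None` when nothing is detected.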
Run the script with:

```bash
python sign_language_detection.py
```