
"Signspeak" is a cutting-edge machine learning project that utilizes computer vision to recognize sign language gestures captured by a camera and interpret it into english.

Primary language: Jupyter Notebook. License: Apache-2.0.

SignSpeak, by "The Krekheds"

SignSpeak is a machine learning project developed for "Bano Qabil" that analyzes videos and images containing sign language gestures and interprets them into English text. It leverages computer vision and deep learning techniques to recognize and understand sign language, making it accessible to a wider audience.

Features

  1. Video and Image Analysis: SignSpeak is designed to analyze both videos and images, providing flexibility in input sources.

  2. Sign Language Recognition: The core functionality of SignSpeak involves recognizing and interpreting various sign language gestures.

  3. Text-To-Speech Output: The interpreted sign language is converted into English text, which is then read aloud by a text-to-speech engine, allowing users to hear the content.
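The recognition-to-text step described above can be sketched as follows. Per-frame gesture predictions are typically noisy, so a common post-processing trick is to emit a letter only after it has been predicted for several consecutive frames. The function and parameter names here are illustrative, not taken from the SignSpeak codebase:

```python
def frames_to_text(frame_labels, min_run=3):
    """Collapse a stream of per-frame gesture labels into text.

    A letter is emitted only once it has been predicted for at least
    `min_run` consecutive frames; `None` marks frames with no hand.
    """
    text = []
    current, run = None, 0
    for label in frame_labels:
        if label == current:
            run += 1
        else:
            current, run = label, 1
        # Append exactly once, at the moment the run becomes long enough.
        if run == min_run and label is not None:
            text.append(label)
    return "".join(text)


# Example: 4 frames of "H", 2 empty frames, then 3 frames of "I".
print(frames_to_text(["H", "H", "H", "H", None, None, "I", "I", "I"]))  # HI
```

The resulting string could then be handed to a text-to-speech library (for instance pyttsx3's `engine.say(text)` followed by `engine.runAndWait()`) to produce the spoken output described above.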

Technologies Used

  • Python: The project is implemented using the Python programming language.
  • OpenCV: Computer vision tasks are handled using the OpenCV library for image and video processing.
  • MediaPipe: The project leverages the MediaPipe library for robust hand and pose detection, a crucial component of sign language interpretation.
  • GUI (Graphical User Interface): The graphical user interface is implemented using the pygame library.
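As a concrete example of how MediaPipe's output feeds gesture recognition: MediaPipe Hands reports 21 landmarks per detected hand, with fingertips at indices 4, 8, 12, 16, and 20 and the PIP joints below them at 6, 10, 14, and 18. Since image y-coordinates grow downward, a raised finger has its tip above (smaller y than) its PIP joint. The helper below is a minimal illustrative sketch built on that convention, not code from the SignSpeak notebooks:

```python
FINGERTIPS = (8, 12, 16, 20)   # index, middle, ring, pinky tips
PIP_JOINTS = (6, 10, 14, 18)   # the joint below each fingertip

def count_extended_fingers(landmarks):
    """Count raised fingers from 21 (x, y) landmark tuples.

    The thumb is ignored here because its "extended" test compares x
    rather than y and depends on handedness.
    """
    return sum(
        1
        for tip, pip in zip(FINGERTIPS, PIP_JOINTS)
        if landmarks[tip][1] < landmarks[pip][1]  # tip above its joint
    )
```

In the real pipeline the landmark list would come from OpenCV frames passed through `mediapipe.solutions.hands.Hands().process(...)`; simple rules like this one can classify static gestures, while dynamic signs need a model over sequences of frames.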

Snapshots


Created by: