ASL Detection with CNN

Welcome to the ASL Detection with CNN project! Inspired by the need to bridge the communication gap between the deaf community and people who do not understand sign language, this project recognizes and interprets hand gestures representing the letters of the American Sign Language alphabet using computer vision techniques and a Convolutional Neural Network (CNN).


Project Overview

The ASL Detection with CNN project is designed to:

  • Detect hand gestures representing ASL letters in real time.
  • Convert detected gestures into corresponding text.
  • Provide speech output for improved accessibility.

Features

  • Real-time Detection: Uses computer vision and a CNN to detect hand gestures in real time (see the hand-tracking sketch after this list).
  • Alphabet Recognition: Recognizes hand gestures for the letters A to Z.
  • Text Conversion: Converts detected gestures into their textual representation.
  • Speech Output: Speaks the recognized text aloud, so users can follow sign language gestures audibly.
  • Additional Libraries: Uses Enchant for word suggestions, MediaPipe for hand tracking, Tkinter for the GUI, and PyTorch for deep learning.
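
The capture loop itself is not shown in this section; as a minimal sketch, real-time hand tracking with OpenCV and MediaPipe might look like the following (the camera index, window name, and confidence threshold are assumptions, not the project's actual settings):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam; index is an assumption
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                # Draw the 21 hand landmarks; a crop around them would
                # typically be passed to the CNN classifier
                mp_drawing.draw_landmarks(
                    frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("ASL hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
cap.release()
cv2.destroyAllWindows()
```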

CNN Model

The project uses a pre-trained Convolutional Neural Network (CNN) model, trained on a dataset of hand gesture images covering the ASL letters A to Z, to classify each detected gesture.
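
The exact architecture is not documented in this section; the following is an illustrative PyTorch sketch of a 26-class letter classifier (the layer sizes, the 64×64 input resolution, and the `asl_cnn.pth` filename are assumptions, not the project's actual model):

```python
import torch
import torch.nn as nn

class ASLNet(nn.Module):
    """Small CNN for 26-class ASL letter classification (illustrative sizes)."""
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            # 64x64 input -> 8x8 feature maps after three 2x2 poolings
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Loading pre-trained weights for inference (filename is hypothetical)
model = ASLNet()
model.load_state_dict(torch.load("asl_cnn.pth", map_location="cpu"))
model.eval()
```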

Technologies Used

  • Python: Primary programming language.
  • OpenCV: Computer vision and video capture.
  • TensorFlow: Deep learning framework.
  • PyTorch: Deep learning framework.
  • Tkinter: Graphical user interface.
  • MediaPipe: Hand tracking.
  • Enchant: Word suggestions and spell checking.
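
As an illustration of the word-suggestion step, Enchant's dictionary API can validate a word assembled from recognized letters and propose corrections (the sample word below is arbitrary, not from the project):

```python
import enchant

d = enchant.Dict("en_US")  # requires pyenchant and an installed dictionary

word = "helo"  # e.g., letters accumulated from recognized gestures
if not d.check(word):
    # Offer likely corrections to the user
    print(d.suggest(word))  # e.g., ['hole', 'help', 'hello', ...]
```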