American-Sign-Language-Detection

A real-time American Sign Language (ASL) detection system using computer vision and deep learning. This project uses a combination of OpenCV, MediaPipe, and TensorFlow to detect and classify ASL hand signs from camera input. The system can recognize a wide range of ASL characters, and can be used to facilitate communication for sign language users.


Overview

The ASL Detector is an AI-powered application that uses computer vision and deep learning to recognize and classify American Sign Language (ASL) characters in real time. It uses the device's camera to capture hand landmark coordinates, which are then processed by a deep learning model to identify the corresponding ASL character.
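
The snippet below is a minimal sketch of this capture step, using OpenCV for the camera feed and MediaPipe Hands for the landmarks; the variable names and parameters are illustrative and are not taken from app.py.

import cv2
import mediapipe as mp

# Illustrative only: grab one frame and flatten the 21 hand landmarks
# into the (x, y) feature vector a classifier could consume.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

ret, frame = cap.read()
if ret:
    # MediaPipe expects RGB input, while OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        features = [coord for lm in landmarks for coord in (lm.x, lm.y)]  # 42 values

cap.release()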

Usage

Inference mode is active by default when you launch app.py. If you have switched to another mode, press "n" to return to inference mode.
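
As an illustration of how such key-based mode switching with OpenCV might look (the actual handling in app.py may differ), using the keys documented in this README ("n" for inference, plus "k" and "d" for the data-capture modes described under Model Training):

import cv2

cap = cv2.VideoCapture(0)
mode = 0  # 0 = inference (default)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    key = cv2.waitKey(10) & 0xFF
    if key == ord('n'):      # inference mode
        mode = 0
    elif key == ord('k'):    # manual key-point logging mode
        mode = 1
    elif key == ord('d'):    # automated key-point capture mode
        mode = 2
    elif key == 27:          # Esc to quit (illustrative choice, not app.py's binding)
        break
    cv2.imshow('ASL Detection', frame)
cap.release()
cv2.destroyAllWindows()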

Table of Contents

  1. Features
  2. Requirements
  3. Installation
  4. Model Training
  5. Contributing
  6. License

Features

  • Real-time ASL detection using the device's camera.
  • Accurate classification of ASL characters using a deep learning model.
  • Hand landmark tracking for precise gesture recognition.
  • Support for a wide range of ASL characters and phrases.
  • High accuracy and robustness in varying lighting conditions.

Requirements

  • OpenCV
  • MediaPipe
  • Pillow
  • NumPy
  • Pandas
  • Seaborn
  • Scikit-learn
  • Matplotlib
  • TensorFlow

Important

If you encounter an error during training at the step that converts the model to TFLite, use TensorFlow v2.16.1.
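
For example, you can pin the version explicitly:

pip install tensorflow==2.16.1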

Installation

  1. Clone the Repository:
git clone https://github.com/AkramOM606/American-Sign-Language-Detection.git
cd American-Sign-Language-Detection
  2. Install Dependencies:
pip install -r requirements.txt
  3. Run the Application:
python main.py

Model Training

If you wish to train the model on your dataset, follow these steps:

Data Collection

  1. Manual Key Points Data Capturing

Activate the manual key-point saving mode by pressing "k"; the window will display "MODE: Logging Key Point".
While this mode is active, pressing any uppercase letter from "A" to "Z" records the current key points and appends them to the "model/keypoint_classifier/keypoint.csv" file, labeled with that letter (see the sketch after this list).

Note

Each time you press an uppercase letter, a single key-point entry is appended to keypoint.csv.

  2. Automated Key Points Data Capturing

Activate the automated key-point saving mode by pressing "d", which changes the content of the camera window to indicate this mode.


Note

You need to specify the dataset directory in app.py.
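
The following is a minimal sketch of how a pressed letter and the current landmark feature vector might be appended to the key-point CSV in manual mode; the real logging code lives in app.py and may differ in detail.

import csv

def log_key_point(letter, features, path="model/keypoint_classifier/keypoint.csv"):
    # 'A' -> class 0, 'B' -> class 1, ..., 'Z' -> class 25 (illustrative mapping).
    class_id = ord(letter) - ord('A')
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([class_id, *features])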

Training

Launch the Jupyter notebook "keypoint_classification.ipynb" and run the cells sequentially from top to bottom.
To change the number of classes in the training data, adjust the value of "NUM_CLASSES = 26" and update the labels in the "keypoint_classifier_label.csv" file accordingly.
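
For instance, with the default 26 classes, keypoint_classifier_label.csv holds one label per line in class-ID order (illustrative excerpt):

A
B
C
...
Z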


Model Structure

The model structure is defined and visualized in the "keypoint_classification.ipynb" notebook.
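
As an illustrative sketch of what a small key-point classifier of this kind might look like, together with the TFLite conversion step from the note above; the layer sizes and the output path are assumptions, and the exact architecture is defined in the notebook itself.

import tensorflow as tf

NUM_CLASSES = 26      # default number of classes in the notebook
INPUT_DIM = 21 * 2    # 21 hand landmarks x (x, y)

# Illustrative architecture only; see keypoint_classification.ipynb for the real one.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(INPUT_DIM,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Conversion to TFLite (the step that requires TensorFlow v2.16.1 per the note above).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('model/keypoint_classifier/keypoint_classifier.tflite', 'wb') as f:  # assumed path
    f.write(tflite_model)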

Contributing

We welcome contributions to enhance this project! Feel free to:

  1. Fork the repository.
  2. Create a new branch for your improvements.
  3. Make your changes and commit them.
  4. Open a pull request to propose your contributions.

We'll review your pull request and provide feedback promptly.

License

This project is licensed under the MIT License: https://opensource.org/licenses/MIT (see LICENSE.md for details).