Software Design Final Project 4 (Spring 2018) code and documentation.
Signum is a near real-time American Sign Language (ASL) translation tool that uses computer vision to recognize and track a user's gestures, then uses a learned model to identify the ASL character that most closely matches each gesture. For more information, see our project website or look at our project poster.
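In broad strokes, the pipeline grabs webcam frames with OpenCV and feeds them to a Keras classifier. The sketch below illustrates that flow; the model file name, input size, and label set are illustrative assumptions, not the project's actual values.

```python
# Minimal sketch of the Signum pipeline: webcam -> Keras classifier -> letter.
# Model file name, input size, and label set are assumptions for illustration.
import cv2
import numpy as np
from keras.models import load_model

LABELS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # static ASL letters (J and Z need motion)

model = load_model("model.h5")  # hypothetical pre-trained classifier
cap = cv2.VideoCapture(0)       # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.resize(frame, (64, 64)).astype("float32") / 255.0  # assumed input size
    probs = model.predict(roi[np.newaxis, ...])                  # add batch dimension
    letter = LABELS[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("Signum", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```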
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
- Clone this repo to your local machine:

```bash
git clone https://github.com/Utsav22G/ASL-Translator.git
```
- `python recognition.py` runs just the computer-vision hand-gesture detection (a minimal sketch of this step follows below).
- `python3 live_demo.py` runs the full demo, comparing the CV gesture recognition output against a pre-trained model.
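For a sense of what the hand-detection step looks like, here is a minimal OpenCV sketch using skin-color segmentation. The actual approach in recognition.py may differ, and the HSV bounds are assumptions that will need tuning for skin tone and lighting.

```python
# Illustrative hand detection via skin-color segmentation in HSV.
# recognition.py's real method may differ; HSV bounds are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # largest blob = likely hand
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("hand detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```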
To get the keyboard up and running, first upgrade your Linux dependencies:
```bash
sudo apt-get update
sudo apt-get upgrade
```
Run `pip install -r requirements.txt` to install all the prerequisites needed to run the program.
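For reference, a hypothetical requirements.txt covering the libraries listed under Built With might look like the following; the actual file in the repo is authoritative.

```
# Hypothetical requirements.txt, inferred from the Built With list
opencv-python   # OpenCV bindings for Python
keras           # model loading and inference
tensorflow      # Keras backend
gTTS            # Google text-to-speech
numpy           # array handling
```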
For a better understanding of who we are and how Signum works, see our website. To see the program in action, watch this video. For a more visual representation of the components of Signum, see our project poster.
- OpenCV - Computer vision library
- Keras - Machine learning library
- gTTS - Google text-to-speech interface
- HTML5Up! - Used to generate project website
- BU ASLRP - Used to generate dataset of ASL images
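As a small illustration of the text-to-speech piece, gTTS can voice a recognized letter in a few lines. The output file name and playback command below are assumptions, not Signum's actual audio path.

```python
# Minimal gTTS sketch: voice a recognized ASL letter aloud.
# Output path and playback command are assumptions for illustration.
from gtts import gTTS

letter = "A"  # e.g. the classifier's latest prediction
tts = gTTS(text=letter, lang="en")
tts.save("letter.mp3")  # write speech audio to an MP3 file
# Play it back with any system player, e.g.: mpg123 letter.mp3
```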
Please read Contributing.md for details on our code of conduct, and the process for submitting pull requests to us.
Isaac Vandor, Utsav Gupta and Diego Berny
This project is licensed under the MIT License - see the LICENSE.md file for details
- Inspiration for this project comes from ASL Gloves.
- Thank you to the incredible researchers at Boston University for their work in developing an ASL Dataset.