Visit the live web app: Sign Language Recognition
This is a web-based model built for people who cannot hear or speak. It helps them convey their messages to those who do not understand sign language.
About: The dataset is a collection of images of alphabets from American Sign Language, separated into 29 folders that represent the different classes.
Content: The training set contains 87,000 images, each 200x200 pixels. There are 29 classes: 26 for the letters A-Z and 3 for SPACE, DELETE, and NOTHING. These three extra classes are very helpful in real-time applications and classification. The test set contains a mere 29 images, to encourage the use of real-world test images.
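The 29 classes described above can be enumerated directly, which is useful for mapping a classifier's integer output back to a label. A minimal sketch (the label names follow this README's folder description; the actual dataset folders may use different casing):

```python
import string

# The 29 class labels described above: the letters A-Z plus three
# control classes (names follow the folder layout described in this README).
CLASSES = list(string.ascii_uppercase) + ["SPACE", "DELETE", "NOTHING"]

# Map each class name to the integer index a classifier would predict.
CLASS_TO_INDEX = {name: i for i, name in enumerate(CLASSES)}

assert len(CLASSES) == 29  # matches the 29 folders in the training set
```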
To run this app on your local device:
- Fork this repo
- Set up all the files locally
- Install the required packages:

```
pip install tensorflow==2.7.0 pillow==8.2.0 streamlit==0.82.0
```
- Then run the following command in the base directory:

```
streamlit run app.py
```
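Inside `app.py`, an uploaded image has to be shaped to match the model's 200x200 RGB input before prediction. This is a hypothetical sketch of such a preprocessing helper (the function name, the `model.h5` filename, and the exact pipeline are assumptions, not taken from this repo):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert a (200, 200, 3) uint8 image to a (1, 200, 200, 3) float batch.

    Hypothetical helper: scales pixel values to [0, 1] and adds the batch
    axis that Keras model.predict() expects.
    """
    if image.shape != (200, 200, 3):
        raise ValueError(f"expected a 200x200 RGB image, got {image.shape}")
    batch = image.astype("float32") / 255.0
    return batch[np.newaxis, ...]

# In the real app this batch would be fed to a loaded Keras model, e.g.:
#   model = tf.keras.models.load_model("model.h5")  # filename is an assumption
#   probs = model.predict(preprocess(img))
```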
Predictions made by the model:
Incorrect predictions made by the model:
Web app: