American-Sign-Language-Detection-using-Computer-Vision

This project translates American Sign Language into English. It uses computer vision and deep learning to predict ASL alphabet letters and forms sentences from those predictions, then converts the predicted words to speech using text-to-speech. The project was implemented at Hack36, the MNNIT Allahabad hackathon.

Primary Language: Jupyter Notebook

Hack36 Project, Team MISFITS

Hack36 MNNIT Allahabad Hackathon Submission

American Sign Language Detection using Deep Neural Networks

American Sign Language (ASL) is a visual language. With signing, the brain processes linguistic information through the eyes. The shape, placement, and movement of the hands, as well as facial expressions and body movements, all play important parts in conveying information. It is the primary language of many North Americans who are deaf or hard of hearing, and is used by many hearing people as well. This project can help people who are deaf or mute communicate easily with those who do not understand sign language.

Formation of message using finger spelling in ASL

Fingerspelling is part of ASL and is used to spell out English words. In the fingerspelled alphabet, each letter corresponds to a distinct handshape. Fingerspelling is often used for proper names or to indicate the English word for something. We use a Convolutional Neural Network to predict each sign language letter and combine the predicted letters to form the sentence to be conveyed. The message is then converted from text to speech using a Python text-to-speech library. Input is captured in real time from the webcam.
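The letter-combining step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the 29 class labels of the Kaggle asl-alphabet dataset (the letters A-Z plus "space", "del", and "nothing") and a hypothetical `assemble_message` helper that turns a stream of per-frame class predictions into text.

```python
# Class labels assumed to follow the Kaggle asl-alphabet dataset:
# indices 0-25 are the letters A-Z, then "space", "del", "nothing".
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["space", "del", "nothing"]

def assemble_message(predicted_indices):
    """Turn a sequence of predicted class indices into a text message.

    "space" inserts a space, "del" removes the last character,
    and "nothing" (no hand sign detected) is ignored.
    """
    chars = []
    for idx in predicted_indices:
        label = LABELS[idx]
        if label == "space":
            chars.append(" ")
        elif label == "del":
            if chars:
                chars.pop()  # undo the last recognized character
        elif label != "nothing":
            chars.append(label)
    return "".join(chars)

# Example: class indices for H, I, space, A, S, L spell out "HI ASL".
print(assemble_message([7, 8, 26, 0, 18, 11]))  # -> HI ASL
```

In the real pipeline, the indices would come from the CNN's per-frame argmax over webcam images, and the assembled string would then be handed to a text-to-speech engine.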

Data Source

https://www.kaggle.com/grassknoted/asl-alphabet

Project Demo

https://www.youtube.com/watch?v=mIyWNsGfHAQ

Prediction of Alphabet A


Prediction of Alphabet L


Final Message
