Sign-Language-Recognition

Sign Language Recognition System.

Method:

Trained a Convolutional Neural Network (CNN) to identify the sign represented by each image. The data was then feature-engineered to extract useful relative motion features, which were used to train classical classification models that identify the specific sign for each LMC input.
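The README does not spell out how the relative motion features are computed, but a common approach for landmark-style input is to take frame-to-frame displacement of each tracked point. The sketch below is a minimal illustration under that assumption; the function name and the shape conventions are hypothetical, not taken from the repository.

```python
import numpy as np

def relative_motion_features(landmarks):
    """Convert a sequence of hand-landmark coordinates into relative
    motion features: per-frame displacement vectors of each landmark.

    landmarks: array of shape (frames, points, 3) with x, y, z positions.
    Returns an array of shape (frames - 1, points * 3) holding the
    frame-to-frame deltas, flattened so classical classifiers (SVM,
    random forest, etc.) can consume them directly.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    deltas = np.diff(landmarks, axis=0)          # motion between frames
    return deltas.reshape(deltas.shape[0], -1)   # flatten per frame

# Example: 4 frames of 2 tracked points moving steadily along x
seq = np.array([[[0, 0, 0], [1, 0, 0]],
                [[1, 0, 0], [2, 0, 0]],
                [[2, 0, 0], [3, 0, 0]],
                [[3, 0, 0], [4, 0, 0]]])
features = relative_motion_features(seq)
print(features.shape)  # (3, 6)
```

Using displacements rather than absolute positions makes the features invariant to where the hand sits in the sensor's field of view, which is usually what a motion-based classifier wants.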

1. Trained two different CNN models (one for digits and another for alphabets).

  1. It recognizes the signs and converts them into speech (text-to-speech), so the recognized text appears on the screen and can also be heard.

Steps:

  1. Run the predict.py file to test the system.

Applications:

Our proposed system will help deaf and hard-of-hearing people communicate better with members of the community. For example, there have been incidents where deaf individuals had trouble communicating with first responders when in need.

Another application is to enable the deaf and hard-of-hearing equal access to video consultations, whether in a professional context or while trying to communicate with their healthcare providers via telehealth. Instead of using basic chat, these advancements would allow the hearing-impaired access to effective video communication.

Performance:

The proposed model for still images identifies the static signs with an accuracy of 95%.