Sign-language-app

The proposed system uses a Convolutional Neural Network (CNN) to recognize static American Sign Language (ASL) hand signs and interpret them as written text. An Android application implements the system and can also convert the recognized text into speech, aiming to bridge the communication gap for people who are deaf, hard of hearing, or nonverbal.

Abstract

Sign language is a way of conveying feelings and thoughts non-verbally, and it is the primary means of communication for many people who are deaf, hard of hearing, or nonverbal. These users share their ideas through hand-based gestures. Sadly, the overwhelming majority of people are not familiar with the meaning of these gestures. In an attempt to bridge this gap, we offer a real-time sign language recognition system built on an American Sign Language (ASL) dataset. The proposed system uses a Convolutional Neural Network (CNN) to recognize the static hand signs for the letters of the ASL alphabet and interpret them as written text. An Android application is built for this system, which can further convert the recognized text into speech.
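To make the pipeline concrete, the sketch below shows one way the per-frame recognition step could look. It is a minimal illustration only: the model file name asl_cnn.h5, the 28x28 grayscale input size, and the label order are assumptions made for this sketch, not fixed details of the project.

```python
# Minimal recognition-pipeline sketch (hypothetical names throughout).
# Assumes a trained Keras CNN saved as "asl_cnn.h5" that takes 28x28
# grayscale images and outputs one of 25 letter labels A-Y; index 9
# ("J") is never predicted in practice, since J and Z involve motion.
import numpy as np
import cv2
from tensorflow.keras.models import load_model

LABELS = "ABCDEFGHIJKLMNOPQRSTUVWXY"

model = load_model("asl_cnn.h5")

def predict_letter(frame_bgr):
    """Classify a single BGR camera frame as a static ASL letter."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (28, 28)).astype("float32") / 255.0
    probs = model.predict(resized.reshape(1, 28, 28, 1), verbose=0)[0]
    return LABELS[int(np.argmax(probs))]
```

On Android, the same idea would typically run through a TensorFlow Lite conversion of the trained model, with the recognized text handed to the platform's text-to-speech engine.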

Problem Statement

Many people with speech disabilities experience a drop in self-esteem and confidence because of their impaired ability to communicate with others. The goal is to develop a real-time solution that interprets sign language into text, provides synthesized voice output through text-to-speech, and supports the reverse (speech-to-text) direction as well.

Steps to Implement the Project

1. Download the ASL dataset from Kaggle.
2. Run the cells of the .ipynb notebook, setting the correct path to the dataset folder.
3. After the training and testing data have been created, build the model for training. A Convolutional Neural Network (CNN) is used here; a sketch of this step follows the list.
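For reference, here is a minimal sketch of what steps 2-3 could look like in the notebook. It assumes the Kaggle "Sign Language MNIST" CSV layout (a label column followed by 784 pixel values per 28x28 grayscale image, with J and Z absent because they involve motion); the file names, paths, and exact architecture are illustrative assumptions, not the notebook's actual code.

```python
# Minimal CNN training sketch (hypothetical file names and layers).
# Assumes the Kaggle "Sign Language MNIST" CSV layout: one label
# column plus 784 pixel columns per 28x28 grayscale image.
import pandas as pd
from tensorflow.keras import layers, models

def to_arrays(df):
    """Split a dataframe into normalized image tensors and labels."""
    y = df["label"].values
    X = df.drop(columns=["label"]).values.reshape(-1, 28, 28, 1) / 255.0
    return X.astype("float32"), y

X_train, y_train = to_arrays(pd.read_csv("sign_mnist_train.csv"))
X_test, y_test = to_arrays(pd.read_csv("sign_mnist_test.csv"))

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(25, activation="softmax"),  # labels 0-24 (9/"J" unused)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
model.save("asl_cnn.h5")  # later convertible to TFLite for the Android app
```

The stacked Conv2D/MaxPooling2D pairs followed by a dense classifier are a standard pattern for small image-classification tasks like this one; the layer sizes shown are a reasonable starting point rather than tuned values.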