
This project develops an automatic translator for Indian Sign Language (ISL) static, single-handed character signs. Each input image is obtained from a video feed of an ISL character sign. The proposed methodology comprises four stages: Acquisition, Preprocessing, Feature Extraction, and Classification.

Primary language: Python

BE_Final

This is my final-year project at the Department of Telecommunication Engineering, for the Bachelor of Engineering at Jawaharlal Nehru National College of Engineering, Shimoga.

A sign language is a language that uses manual communication and body language to convey meaning, as opposed to acoustically conveyed sound patterns. It can involve simultaneously combining hand shapes, orientation and movement of the hands, arms, or body, and facial expressions to fluidly express a speaker's thoughts. Sign language is used by hearing-impaired people to communicate, but it is not understood by most hearing people. Sign language recognition therefore helps bridge communication between hearing-impaired people and those unfamiliar with sign language, and it has emerged as an important research area in computer vision.

The main aim of this project is to develop an automatic translator for Indian Sign Language (ISL) static, single-handed character signs. Each input image is obtained from a video feed of an ISL character sign. The proposed methodology comprises four stages: Acquisition, Preprocessing, Feature Extraction, and Classification. Using bare hands, without gloves or markers, allows the user to interact with the system in a natural way. We have considered 24 different alphabets in the video sequences. This project provides the design approach for both software and hardware.
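The four-stage pipeline named above could be sketched roughly as follows. This is only a minimal illustration in NumPy: the README does not specify the project's actual features or classifier, so the grayscale downsampling, intensity-histogram features, and nearest-neighbour template matcher below are placeholder assumptions, not the project's implementation.

```python
import numpy as np

def preprocess(frame, size=32):
    """Preprocessing: normalise pixel values to [0, 1] and crudely
    downsample the frame by strided slicing to a size x size patch."""
    g = frame.astype(np.float64) / 255.0
    step = max(1, g.shape[0] // size)
    return g[::step, ::step][:size, :size]

def extract_features(img, bins=16):
    """Feature extraction: a normalised intensity histogram as a
    simple fixed-length feature vector (placeholder for real features)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / max(1, hist.sum())

def classify(features, templates):
    """Classification: nearest-neighbour match of the feature vector
    against a dict of per-letter template feature vectors."""
    return min(templates,
               key=lambda label: np.linalg.norm(features - templates[label]))

# Acquisition would normally read frames from a camera; here we use
# two synthetic frames standing in for two different character signs.
dark = np.zeros((64, 64), dtype=np.uint8)
bright = np.full((64, 64), 255, dtype=np.uint8)
templates = {
    "A": extract_features(preprocess(dark)),
    "B": extract_features(preprocess(bright)),
}
predicted = classify(extract_features(preprocess(bright)), templates)
```

In a real system the acquisition step would pull frames with a camera capture loop, and each stage would be swapped for the project's actual choices; the point of the sketch is only the stage-by-stage data flow.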