An LSTM network that translates sign language to text in real time using sequential frames
According to the World Health Organization, around 300 million people are currently deaf and 1 million are unable to speak, and it is expected that by 2050 there could be over 450 million. The impacts of hearing loss are broad and can be profound. They include a loss of the ability to communicate with others and delayed language development in children, which can lead to social isolation and loneliness. Many areas lack sufficient accommodations for deaf people, which affects academic performance and employment options. Children with hearing loss and deafness in developing countries rarely receive any schooling. The WHO estimates that unaddressed hearing loss costs the global economy US$ 980 billion annually through health-sector costs (excluding the cost of hearing devices), costs of educational support, loss of productivity, and societal costs.
In this contemporary era, technological advancement has produced many mechanisms to help deaf and mute people. However, one of the main problems that remains is the difficulty of communicating with them.
Developed an LSTM network that translates sign language to text in real time using OpenCV. The model takes sequential frames of an action and predicts the action in real time, achieving an accuracy of 96.47%. MediaPipe, an open-source framework, is used to extract keypoints and detect the sign.
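The real-time pipeline described above can be sketched as follows: each frame's MediaPipe keypoints are flattened into a fixed-length vector, a sliding window collects the most recent frames, and the window is reshaped into a batch for the LSTM. This is a minimal NumPy-only sketch, not the project's actual code: the function and class names, the 30-frame window length, and the zero-fill for undetected parts are all assumptions; the keypoint count follows MediaPipe Holistic's landmark layout (33 pose, 468 face, 21 per hand).

```python
from collections import deque
import numpy as np

# Assumed per-frame feature layout, following MediaPipe Holistic landmarks:
# 33 pose x (x, y, z, visibility) + 468 face x (x, y, z)
# + 21 left-hand x (x, y, z) + 21 right-hand x (x, y, z)
N_KEYPOINTS = 33 * 4 + 468 * 3 + 21 * 3 + 21 * 3  # 1662 values per frame
SEQ_LEN = 30  # assumed number of sequential frames fed to the LSTM

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate one frame's landmark arrays into a single feature vector,
    substituting zeros for any part MediaPipe did not detect."""
    parts = [
        pose.flatten() if pose is not None else np.zeros(33 * 4),
        face.flatten() if face is not None else np.zeros(468 * 3),
        left_hand.flatten() if left_hand is not None else np.zeros(21 * 3),
        right_hand.flatten() if right_hand is not None else np.zeros(21 * 3),
    ]
    return np.concatenate(parts)

class FrameBuffer:
    """Sliding window over the most recent SEQ_LEN keypoint vectors."""

    def __init__(self):
        self.frames = deque(maxlen=SEQ_LEN)

    def push(self, keypoints):
        # Oldest frame is dropped automatically once the window is full.
        self.frames.append(keypoints)

    def ready(self):
        return len(self.frames) == SEQ_LEN

    def as_batch(self):
        # Shape (1, SEQ_LEN, N_KEYPOINTS): one sequence, ready for
        # something like model.predict(buffer.as_batch()) on a Keras LSTM.
        return np.expand_dims(np.stack(self.frames), axis=0)
```

In a live loop, each OpenCV frame would be run through MediaPipe, flattened with `flatten_keypoints`, pushed into the buffer, and, once `ready()` returns `True`, passed to the trained LSTM to predict the current sign.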