SignifySage is a system designed to empower the deaf community by accurately recognizing and interpreting sign language gestures. The project pairs a user-friendly React frontend with a Flask backend, built around a custom Convolutional Neural Network (CNN) model.
- Effortless Communication: SignifySage facilitates seamless communication between the hearing-impaired and others through precise sign language gesture recognition.
- Custom CNN Model: We've developed a robust CNN model tailored for accurate sign language recognition, ensuring reliable results.
- User-Friendly Interface: The React-based frontend provides an intuitive user experience, making it accessible to both experts and newcomers.
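To give a sense of what the CNN does under the hood, here is a minimal pure-Python sketch of a single convolution step followed by a ReLU activation, the basic building block of any CNN layer. This is illustrative only: the tiny frame, the kernel values, and the function names are made up for the example and are not taken from the project's actual model.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep-learning frameworks) over a single-channel image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise ReLU activation, applied after each convolution."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A toy 3x3 "frame" and a 2x2 kernel, purely for demonstration.
frame = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
kernel = [
    [1, 0],
    [0, 1],
]
features = relu(conv2d(frame, kernel))  # [[6.0, 8.0], [12.0, 14.0]]
```

A real model stacks many such layers (with learned kernels, pooling, and a final classifier), but each layer's core arithmetic is exactly this sliding dot product.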
Our project is built upon a meticulously collected and curated dataset of sign language gestures. This dataset ensures the model's proficiency and reliability in interpreting a wide range of gestures.
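This README does not spell out how frames from the dataset are prepared for the model, but a typical pipeline for image classifiers converts each RGB frame to grayscale and scales pixel values to [0, 1]. The sketch below shows one common way to do that, assuming 8-bit RGB input; the luminance weights (ITU-R BT.601) and function names are illustrative, not taken from the project.

```python
def to_grayscale(rgb_pixel):
    """Luminance approximation (ITU-R BT.601 weights) for one RGB pixel."""
    r, g, b = rgb_pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def normalize_frame(rgb_frame):
    """Convert an 8-bit RGB frame to a grayscale array scaled to [0, 1],
    the kind of input a small CNN typically expects."""
    return [[to_grayscale(px) / 255.0 for px in row] for px_row in [] or rgb_frame for row in [px_row]]

def normalize_frame(rgb_frame):
    """Convert an 8-bit RGB frame to a grayscale array scaled to [0, 1],
    the kind of input a small CNN typically expects."""
    return [[to_grayscale(px) / 255.0 for px in row] for row in rgb_frame]

# One white pixel and one black pixel, normalized.
frame = [[(255, 255, 255), (0, 0, 0)]]
normalized = normalize_frame(frame)  # [[~1.0, 0.0]]
```

Consistent preprocessing matters: the same transform must be applied at training time and at inference time, or the model's accuracy degrades.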
You can access the trained CNN model on Google Drive. Follow the instructions in this README to use it effectively.
- Clone this repository.
- Install dependencies for the frontend with `npm install` and for the backend with `pip install -r requirements.txt`.
- Run the frontend and backend servers.
- Access the application via your browser.
- Open the sign language gesture recognition feature.
- Perform a sign language gesture in front of your camera, and let SignifySage translate it into text.
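The final step above, turning a recognized gesture into text, usually amounts to picking the highest-scoring class from the model's output and mapping it to a label. The sketch below assumes a hypothetical A-to-Z label set, since the project's real class list is not given in this README.

```python
# Hypothetical label set: the project's real classes are not listed in
# this README, so the letters A-Z stand in for them here.
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def decode_prediction(scores, labels=LABELS):
    """Map the CNN's per-class scores (e.g. a softmax output) to a
    label by picking the highest-scoring class."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

# A toy score vector with its peak at index 7, which maps to "H".
scores = [0.01] * 26
scores[7] = 0.75
print(decode_prediction(scores))  # prints "H"
```

The same helper works for any label set; passing a different `labels` list (for example, word-level gesture names) changes only the mapping, not the decoding logic.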
We welcome contributions! If you'd like to contribute to SignifySage, feel free to open an issue or submit a pull request.
The SignifySage team extends its gratitude to the open-source community for their support.
Have questions or feedback? Feel free to reach out to us at kashishgarg89.5@gmail.com.
We hope SignifySage enhances communication and connectivity for the deaf community. Thank you for using our application.