
EchoSign: Sign Language Gesture Detection and Voice Conversion

Overview: EchoSign is an AI-powered system built with OpenCV, React, and Spring Boot (with a Python FastAPI service hosting the recognition model) that detects sign language gestures in real time and converts them into voice. The application aims to bridge communication barriers between the deaf and hearing communities by interpreting sign language gestures as audible speech.

Key Features:

  1. Real-time Gesture Detection: Using computer vision techniques implemented with OpenCV, EchoSign detects and interprets sign language gestures in real time (see the sketch after this list).
  2. Voice Conversion: The system converts detected sign language gestures into synthesized speech, enabling hearing individuals to comprehend the intended message.
  3. Web Interface: Built with React, EchoSign offers a user-friendly web interface accessible across various devices, ensuring convenience and accessibility.
  4. Backend Support: Powered by Spring Boot, the backend infrastructure ensures robustness, scalability, and efficient handling of gesture recognition and voice conversion processes.
  5. Customizable: Developers can extend and customize EchoSign's functionality to suit specific requirements and integrate additional features as needed.
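
The detection pipeline in features 1 and 2 follows a standard pattern: capture frames with OpenCV, extract hand landmarks, and feed them to a gesture classifier. Below is a minimal sketch of that loop, assuming a MediaPipe Hands pipeline and a placeholder classifier; it illustrates the technique, not EchoSign's actual model code.

    # Illustrative detection loop: OpenCV captures frames, MediaPipe Hands
    # extracts landmarks, and a placeholder classifier would map them to a
    # gesture label. Not EchoSign's actual model code.
    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)  # default webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Flatten the 21 hand landmarks into a 63-float feature vector.
            lms = results.multi_hand_landmarks[0].landmark
            features = [c for lm in lms for c in (lm.x, lm.y, lm.z)]
            # label = model.predict([features])  # hypothetical classifier call
        cv2.imshow("EchoSign", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()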

System Requirements:

  • OpenCV
  • React
  • Python 3 with FastAPI
  • Node.js
  • Java (for the Spring Boot backend)
  • Web browser with HTML5 and WebSocket support

Installation:

  1. Clone the EchoSign repository to your local machine.
  2. Navigate to the frontend directory and install dependencies using npm install.
  3. Run the frontend server using npm start.
  4. Navigate to the backend directory and build the Spring Boot application.
  5. Before running gesture recognition, ensure Python is configured in your environment and the required dependencies are installed.
  6. Install "mediapipe, tensorflow, numpy, uvicorn, requests, fastapi, keras, scikit-learn" into your Python environment (os is part of the Python standard library, and MediaPipe's vision tasks ship with the mediapipe package, so neither needs a separate install).
  7. After installing all the required Python modules, navigate into the model folder.
  8. Run the FastAPI server with "uvicorn learn:app" (a sketch of such an entry point follows this list).
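
For orientation, "uvicorn learn:app" expects a module named learn that exposes a FastAPI instance called app. Below is a minimal sketch of what such an entry point might look like; the /predict endpoint, payload shape, and model file name are illustrative assumptions, not the repository's actual code.

    # Hypothetical learn.py compatible with "uvicorn learn:app". The
    # endpoint, payload shape, and model file name are assumptions.
    import numpy as np
    from fastapi import FastAPI
    from keras.models import load_model
    from pydantic import BaseModel

    app = FastAPI()
    model = load_model("gesture_model.h5")  # placeholder model file

    class Landmarks(BaseModel):
        points: list[float]  # e.g. 21 landmarks x (x, y, z) = 63 floats

    @app.post("/predict")
    def predict(payload: Landmarks):
        # Run the Keras classifier and return the top gesture index.
        scores = model.predict(np.array([payload.points]))
        return {"gesture": int(scores.argmax())}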

Usage:

  1. Access the EchoSign web interface through your preferred web browser.
  2. Ensure your device's camera is enabled and positioned to capture sign language gestures.
  3. Perform sign language gestures within the camera's view.
  4. EchoSign will detect and interpret the gestures, converting them into audible speech in real-time.
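
To verify the model server independently of the web interface, a small smoke test can be run against it. The example below assumes uvicorn's default port (8000) and the hypothetical /predict endpoint sketched in the installation section, not necessarily the repository's actual API.

    # Hypothetical smoke test for the model server; the endpoint and
    # payload match the illustrative sketch above.
    import requests

    payload = {"points": [0.0] * 63}  # dummy landmark vector
    resp = requests.post("http://127.0.0.1:8000/predict", json=payload)
    resp.raise_for_status()
    print(resp.json())  # e.g. {"gesture": 3}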

Contributing: Contributions to EchoSign are welcome! Please follow these steps:

  1. Fork the repository.
  2. Create your feature branch (git checkout -b feature/YourFeature).
  3. Commit your changes (git commit -am 'Add YourFeature').
  4. Push to the branch (git push origin feature/YourFeature).
  5. Create a new Pull Request.

Authors:

  • Muhammad Nauman Chaudhry
  • Ahmad Ali Shahid
  • Yusra Zainab

Acknowledgments:

  • Special thanks to the OpenCV, React, and Spring Boot communities for their invaluable contributions.

Contact: For inquiries or support, please contact us at kashifshareef04@gmail.com.