This project uses Python to build a Deep Learning model (a Convolutional Neural Network, CNN) for real-time recognition of American Sign Language (ASL) gestures, converting them into letters of the English alphabet with 95% accuracy. The trained model is integrated into a web application: the user-friendly interface is built with HTML, CSS, and Bootstrap, and the model is deployed with Flask, so users can recognize ASL gestures in real time directly from the browser.
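The README does not spell out the network architecture, so the following is only a plausible sketch of such a CNN in TensorFlow/Keras; the layer sizes, the 64x64 grayscale input, and the 26-letter output are all assumptions, not the project's actual model.

```python
# Plausible sketch of a CNN for ASL alphabet classification.
# Layer sizes, the 64x64 grayscale input, and the 26-class output
# are assumptions; the real architecture may differ.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26  # assumption: one class per English letter

def build_model(input_shape=(64, 64, 1)):
    """Build a small CNN mapping a gesture image to letter probabilities."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# One forward pass on a dummy batch to confirm the shapes line up.
probs = model.predict(np.zeros((1, 64, 64, 1), dtype="float32"), verbose=0)
```

In practice the model would be trained on a labeled ASL image dataset and saved (e.g. with `model.save(...)`) so the Flask app can load it once at startup.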
- Real-time recognition of ASL gestures.
- High accuracy rate of 95%.
- User-friendly web interface.
- Interactive design using HTML, CSS, and Bootstrap.
- Model deployment using Flask.
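The Flask deployment listed above could look roughly like the minimal sketch below; the route names, the raw-bytes payload, and the stubbed-out model are assumptions for illustration, not the project's actual `app.py`.

```python
# Minimal sketch of serving the ASL model with Flask.
# Routes and payload format are assumptions, not the project's real API.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In the real app this would be the trained CNN loaded once at startup,
# e.g. keras.models.load_model("asl_cnn.h5"); stubbed here for brevity.
def predict_letter(image_bytes):
    return "A"  # placeholder prediction

@app.route("/")
def index():
    # The real app would render the HTML/Bootstrap interface here.
    return "ASL gesture recognition"

@app.route("/predict", methods=["POST"])
def predict():
    frame = request.get_data()  # raw image bytes sent from the browser webcam
    return jsonify({"letter": predict_letter(frame)})

# Started with `python app.py`:
# app.run(host="127.0.0.1", port=5000)
```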
- Python 3.x
- Flask
- TensorFlow
- OpenCV
- HTML, CSS, Bootstrap
- Clone the repository:

  git clone https://github.com/yourusername/asl-gesture-recognition.git
  cd asl-gesture-recognition

- Install the required Python packages:

  pip install -r requirements.txt

- Run the Flask application:

  python app.py

- Open your web browser and navigate to http://127.0.0.1:5000/.
- Launch the web application by following the installation steps.
- Show ASL gestures in front of your webcam.
- The application recognizes each gesture and displays the corresponding English letter in real time.
Contributions are welcome! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch:
git checkout -b feature-name
- Commit your changes:
git commit -m 'Add some feature'
- Push to the branch:
git push origin feature-name
- Create a pull request.
If you have any questions, feel free to reach out:
- GitHub: Shivam Tiwari
- Developed by Shivam Tiwari
- Libraries and frameworks: TensorFlow, Flask, OpenCV, HTML, CSS, Bootstrap