GestureVision is a real-time hand detection and tracking application built with React, TensorFlow.js, and the Handpose model. This application captures video from your webcam, detects hands, tracks their landmarks, and visualizes the results on a canvas overlay.
- Real-Time Hand Detection: Detect and track hands in real-time using the Handpose model.
- Landmark Visualization: Display detected hand landmarks and joint connections on a canvas overlay.
- Webcam Feed: Utilize webcam input for live hand tracking.
- Inverted Video Feed: Mirror the webcam feed to match typical webcam orientation.
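Handpose reports each detected hand as 21 landmarks in `[x, y, z]` pixel coordinates of the un-mirrored video frame. When the feed is mirrored but the canvas overlay is not, the x-coordinates have to be flipped against the video width before drawing or the overlay lands on the wrong side. A minimal sketch of that flip (the `mirrorLandmarks` helper name is illustrative, not taken from the project):

```javascript
// Handpose returns 21 landmarks per hand as [x, y, z] pixel coordinates
// relative to the un-mirrored video frame. If the <video> element is
// mirrored (e.g. CSS transform: scaleX(-1)) but the canvas overlay is not,
// flip each x-coordinate against the video width so the drawing lines up.
function mirrorLandmarks(landmarks, videoWidth) {
  return landmarks.map(([x, y, z]) => [videoWidth - x, y, z]);
}

// A point 100px from the left edge of a 640px-wide feed maps to 540px
// in the mirrored overlay.
console.log(mirrorLandmarks([[100, 50, 0]], 640)); // → [[540, 50, 0]]
```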
To get started with GestureVision, follow these steps:
1. Clone the Repository

   ```bash
   git clone https://github.com/StrangeCoder1729/GestureVision.git
   cd GestureVision
   ```

2. Install Dependencies

   Ensure you have Node.js and npm installed. Then, install the project dependencies:

   ```bash
   npm install
   ```

3. Start the Development Server

   Run the following command to start the development server:

   ```bash
   npm start
   ```
Open your browser and navigate to http://localhost:3000 to view the application.
- Allow Webcam Access: Grant the application permission to access your webcam when prompted.
- View Hand Tracking: The application will display the webcam feed with hand landmarks and connections overlaid in real-time.
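Each prediction returned by the Handpose model's `estimateHands` call carries a `handInViewConfidence` score between 0 and 1 alongside its landmarks. As an illustration of how noisy frames could be kept off the overlay (the `confidentHands` helper is a hypothetical addition, not part of the project code):

```javascript
// Handpose predictions look like:
//   { handInViewConfidence: 0.95, landmarks: [[x, y, z], ...], ... }
// Filtering by confidence before drawing avoids flickering overlays on
// frames where the model is unsure a hand is present.
function confidentHands(predictions, threshold = 0.8) {
  return predictions.filter((p) => p.handInViewConfidence >= threshold);
}

// Mock predictions shaped like the Handpose output:
const mockPredictions = [
  { handInViewConfidence: 0.95, landmarks: [] },
  { handInViewConfidence: 0.3, landmarks: [] },
];
console.log(confidentHands(mockPredictions).length); // → 1
```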
- `App.js`: The main React component that handles webcam input, hand detection, and drawing on the canvas.
- `utilities.js`: Contains the `drawHand` function for drawing hand landmarks and connections on the canvas.
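To sketch what a `drawHand`-style routine iterates over: the 21 Handpose landmarks are commonly connected finger by finger using an index map like the one below (the map follows the layout used in Nicholas Renotte's examples; the `toSegments` helper is illustrative, not from `utilities.js`):

```javascript
// Landmark indices for each finger chain, starting from the wrist (0).
const fingerJoints = {
  thumb: [0, 1, 2, 3, 4],
  indexFinger: [0, 5, 6, 7, 8],
  middleFinger: [0, 9, 10, 11, 12],
  ringFinger: [0, 13, 14, 15, 16],
  pinky: [0, 17, 18, 19, 20],
};

// Expand each finger chain into line segments (pairs of landmark
// indices), so the canvas drawing loop is just one
// ctx.moveTo(...landmarks[a]) / ctx.lineTo(...landmarks[b]) per pair.
function toSegments(joints) {
  return Object.values(joints).flatMap((path) =>
    path.slice(1).map((idx, i) => [path[i], idx])
  );
}

// 5 fingers × 4 segments each:
console.log(toSegments(fingerJoints).length); // → 20
```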
This project is inspired by the work of Nicholas Renotte, who has created educational content and examples related to hand tracking and TensorFlow.js.
- React - A JavaScript library for building user interfaces.
- TensorFlow.js - An open-source library for machine learning in JavaScript.
- Handpose Model - A TensorFlow.js model for hand tracking.
- react-webcam - A React component for accessing the webcam.
Feel free to contribute to this project by submitting issues or pull requests!