If you like this project, please give it a star! (And please forgive my poor English.)
Mid-air brush [Demo]
Mid-air gesture recognition and drawing: by default, gesture 1 is the brush, gesture 2 changes the brush color, and gesture 5 clears the drawing board. The display is rendered with OpenCV.
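The drawing logic boils down to a dispatch on the detected gesture class. Below is a minimal sketch of that mapping, assuming a hypothetical `handle_detection(label, fingertip)` hook called once per frame with the YOLOv5 result; it illustrates the idea rather than the repo's actual code:

```python
import cv2
import numpy as np

COLORS = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # BGR palette to cycle through

canvas = np.zeros((480, 640, 3), dtype=np.uint8)  # the drawing board
color_idx = 0
last_point = None

def handle_detection(label, fingertip):
    """Apply one detected gesture to the canvas (hypothetical per-frame hook)."""
    global color_idx, last_point
    if label == "1":                      # gesture 1: draw with the brush
        if last_point is not None:
            cv2.line(canvas, last_point, fingertip, COLORS[color_idx], 4)
        last_point = fingertip
    elif label == "2":                    # gesture 2: switch brush color
        color_idx = (color_idx + 1) % len(COLORS)
        last_point = None
    elif label == "5":                    # gesture 5: clear the drawing board
        canvas[:] = 0
        last_point = None
```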
This version of the project is based on GA_Data_Science_Capstone. It uses YOLOv5 to recognize gestures and to track the index finger for drawing. Please build your own gesture dataset and label it; data preprocessing is covered in files 01 and 02. The project can also run with a Raspberry Pi: the Pi captures images and streams them to the PC for inference, at the cost of some latency.
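As an illustration of the preprocessing step, here is a minimal sketch of the kind of augmentation notebook 01 performs, applied before labeling (the exact transforms in the notebook may differ):

```python
import cv2
import numpy as np

def augment(img):
    """Return a few augmented variants of one training image."""
    flipped = cv2.flip(img, 1)  # mirror horizontally
    brighter = np.clip(img.astype(np.int16) + 40, 0, 255).astype(np.uint8)
    darker = np.clip(img.astype(np.int16) - 40, 0, 255).astype(np.uint8)
    return [flipped, brighter, darker]
```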
cd v3.0
pip install -r requirements.txt
jupyter notebook
# open and run 01_image_processing_and_data_augmentation.ipynb
# run labelImg to label the four classes: 1, 2, 5, forefinger
python 02_munge_data.py
# train model
python train.py --img 512 --batch 16 --epochs 100 --data config.yaml --cfg models/yolov5s.yaml --name yolo_example
tensorboard --logdir runs/
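The `--data config.yaml` flag expects a YOLOv5-style dataset config. A hedged sketch, where the paths are assumptions but the four class names match the labeling step above:

```yaml
# config.yaml (sketch): the paths below are assumptions
train: data/images/train
val: data/images/val

nc: 4  # number of classes
names: ['1', '2', '5', 'forefinger']
```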
# run with the PC camera
python detect.py --weights weights/best.pt --img 512 --conf 0.3 --source 0
# run with a Raspberry Pi camera stream
# on the Raspberry Pi:
sudo raspivid -o - -rot 180 -t 0 -w 640 -h 360 -fps 30|cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8080}' :demux=h264
# on the PC:
python detect.py --weights runs/exp12_yolo_example/weights/best.pt --img 512 --conf 0.15 --source http://192.168.43.46:8080/
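Before pointing detect.py at the stream, it can help to confirm the PC can read it at all. A small check using OpenCV's HTTP stream support (the address mirrors the command above; replace it with your Pi's IP):

```python
import cv2

cap = cv2.VideoCapture("http://192.168.43.46:8080/")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("raspi stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```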
v2.0: gesture recognition based on OpenCV and convex-hull detection. Skin-color detection + convex hull + counting convexity defects along the hand contour (i.e., counting fingers). A minimal sketch of the idea follows the commands below.
cd v2.0
python gesture.py
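A minimal sketch of the v2.0 idea, assuming rough YCrCb skin thresholds and a standard convexity-defect heuristic (gesture.py's actual values may differ):

```python
import cv2
import numpy as np

def count_fingers(frame):
    """Skin mask -> largest contour -> convex hull -> convexity defects -> finger count."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin range (assumption)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)          # assume the hand is the largest blob
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-6), -1, 1))
        if angle < np.pi / 2 and depth > 10000:        # deep, sharp valley = gap between fingers
            fingers += 1
    return fingers + 1 if fingers > 0 else 0           # n gaps between fingers -> n+1 fingers
```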
v1.0: skin-color detection + convex hull, based on OpenCV. A sketch of the pipeline follows the commands below.
cd v1.0
python main.py
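And a sketch of the v1.0 pipeline, a skin mask plus a convex-hull overlay, with an assumed HSV skin range (main.py may use different thresholds):

```python
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))     # rough skin range (assumption)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)            # assume the hand is the largest blob
        hull = cv2.convexHull(hand)
        cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)  # draw the hull on the frame
    cv2.imshow("v1.0 sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                    # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```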