PyTorch implementation of Realtime_Multi-Person_Pose_Estimation
Original version forked from https://github.com/last-one/Pytorch_Realtime_Multi-Person_Pose_Estimation
Train
prepare training data
- Download COCO train2014 and val2014 from the official website
- cd ./preprocessing
- configure generate_json_mask.py (ann_dir, ...)
- run generate_json_mask.py (a minimal sketch of this step follows the list)
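For orientation, here is a minimal sketch of what a mask-generation step like this typically does, assuming pycocotools: crowd regions and people without labeled keypoints are rasterized into a binary mask so the training loss can ignore them. The output path and filename scheme are illustrative, not generate_json_mask.py's actual interface.

```python
# Hedged sketch of the mask-generation step; the output layout is an
# assumption, not the actual interface of generate_json_mask.py.
import os
import numpy as np
from pycocotools.coco import COCO

ann_file = 'annotations/person_keypoints_train2014.json'  # the ann_dir you configure
coco = COCO(ann_file)
os.makedirs('mask', exist_ok=True)

for img_id in coco.getImgIds(catIds=coco.getCatIds(catNms=['person'])):
    info = coco.loadImgs(img_id)[0]
    mask = np.zeros((info['height'], info['width']), dtype=np.uint8)
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_id, iscrowd=None)):
        # Mask out crowd regions and people without labeled keypoints
        # so the loss ignores them during training.
        if ann['iscrowd'] or ann['num_keypoints'] == 0:
            mask |= coco.annToMask(ann)
    np.save(os.path.join('mask', '%012d.npy' % img_id), mask)
```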
start training
- cd ./experiments/baseline/
- configure coco_loader.py (img_path at line 198)
- configure train_pose.py (--train_dir)
- run train_pose.py (a sketch of the per-stage loss follows the list)
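The network follows the usual two-branch, multi-stage design of this model: every stage predicts vecmaps (PAFs) and heatmaps, each supervised under the ignore mask built during preprocessing. A minimal sketch of that loss, assuming the model returns one (paf, heatmap) pair per stage; all names are illustrative:

```python
# Minimal sketch of the multi-stage masked MSE loss; `stage_outputs` is
# assumed to be a list of (paf, heatmap) predictions, one pair per stage.
import torch
import torch.nn.functional as F

def multi_stage_loss(stage_outputs, vecmap_gt, heatmap_gt, mask):
    """Sum masked MSE over all stages and both branches."""
    total = 0.0
    for paf_pred, heat_pred in stage_outputs:
        # mask has shape (N, 1, H, W); broadcasting zeroes out ignored regions
        total = total + F.mse_loss(paf_pred * mask, vecmap_gt * mask)
        total = total + F.mse_loss(heat_pred * mask, heatmap_gt * mask)
    return total
```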
Test and eval
- test a single image
  - run ./evaluation/test_pose.py
- evaluate the caffemodel downloaded from the authors
  - run ./evaluation/eval_caffe.py: 53.8% (50 images)
- evaluate the PyTorch model converted from the caffemodel
  - convert with ./preprocessing/convert_model.py (see the conversion sketch below)
  - run ./evaluation/eval_pytorch.py: 54.4% (50 images), 54.1% (1000 images)
- evaluate a PyTorch model trained by yourself
  - run ./evaluation/eval_pytorch.py (see the evaluation sketch below)
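The conversion step boils down to copying each Caffe layer's numpy weights into the matching PyTorch parameter. A hedged sketch of that idea; the function signature and name mapping are assumptions, and the real mapping lives in ./preprocessing/convert_model.py:

```python
# Hedged sketch of the caffemodel -> PyTorch conversion idea. `caffe_params`
# and `name_map` are illustrative inputs, not convert_model.py's interface.
import torch

def copy_caffe_weights(model, caffe_params, name_map):
    """caffe_params: caffe layer name -> (weight, bias) numpy arrays.
    name_map: PyTorch module name -> caffe layer name."""
    state = model.state_dict()
    for torch_name, caffe_name in name_map.items():
        weight, bias = caffe_params[caffe_name]
        # Both frameworks store conv weights as (out, in, kH, kW),
        # so the arrays can be copied without transposing.
        state[torch_name + '.weight'].copy_(torch.from_numpy(weight))
        state[torch_name + '.bias'].copy_(torch.from_numpy(bias))
```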
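The mAP numbers below come from the standard COCO keypoint evaluation. A minimal sketch of that evaluation with pycocotools, assuming detections have been written to a results.json in COCO format; the file names and the way the 50-image subset is chosen are illustrative:

```python
# Hedged sketch of the COCO keypoint evaluation behind the mAP numbers.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/person_keypoints_val2014.json')
coco_dt = coco_gt.loadRes('results.json')  # detections in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, 'keypoints')
# Evaluate on a 50-image subset (the selection here is illustrative).
coco_eval.params.imgIds = sorted(coco_gt.getImgIds())[:50]
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```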
Results
- caffemodel evaluated by the Python scripts
  - 53.8% (50 images)
- PyTorch model converted from Caffe by the Python scripts
  - 54.4% (50 images), 54.1% (1000 images)
- PyTorch model trained on train2014
  - 45.9% (50 images) after 60,000 iterations (stepsize = 50,000)
Experiments
<1> mechanism (eval_mechanism)
- heatmaps and vecmaps generated from ground truth, fed through post-processing: 60.8%
- adding redundant connections: 60.8% -> 67.7%
- increasing sigma from 7 to 9: 67.7% -> 68.8% (see the heatmap-rendering sketch after this list)
- replacing all_peaks with ground-truth keypoints: 68.8% -> 78% (open question: does the gain come from single-person or multi-person cases?)
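The sigma experiment changes the spread of the Gaussian used to render ground-truth heatmaps: a larger sigma makes each keypoint peak cover more pixels. A minimal sketch of that rendering, with illustrative names:

```python
# Minimal sketch of rendering a ground-truth heatmap for one joint type;
# a larger sigma (e.g. 9 instead of 7) widens each Gaussian peak.
import numpy as np

def render_heatmap(keypoints, height, width, sigma=7.0):
    """keypoints: iterable of (x, y) pixel coordinates for one joint type."""
    ys, xs = np.mgrid[:height, :width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for x, y in keypoints:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # take the max where people overlap
    return heatmap
```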
<2> VGG_1branch (./experiments/1branch)
- merge the L1 (PAF) branch and the L2 (heatmap) branch into a single branch (sketched below)
- train set: valminusminival2014; test set: minival2014
- 42.9%
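A sketch of the single-branch idea: instead of two parallel branches per stage, one head predicts the concatenated PAF + heatmap channels and splits them afterwards. Layer sizes are illustrative, not the actual ./experiments/1branch architecture; 38 and 19 are the usual channel counts for this model (19 limbs x 2 offsets, 18 keypoints + background).

```python
# Illustrative single-branch stage: one trunk predicts PAFs and heatmaps
# together as (38 + 19) output channels instead of two parallel branches.
import torch
import torch.nn as nn

class OneBranchStage(nn.Module):
    def __init__(self, in_channels, paf_channels=38, heat_channels=19):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 128, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(128, paf_channels + heat_channels, 1),
        )
        self.paf_channels = paf_channels

    def forward(self, x):
        out = self.trunk(x)
        # split the merged prediction back into PAFs and heatmaps
        return out[:, :self.paf_channels], out[:, self.paf_channels:]
```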