Realtime_Multi-Person_Pose_Estimation.PyTorch

PyTorch implementation of Realtime Multi-Person Pose Estimation


Original version forked from https://github.com/last-one/Pytorch_Realtime_Multi-Person_Pose_Estimation

Train

  1. Prepare the training data

    • Download COCO train2014 and val2014 from the official website
    • cd ./preprocessing
    • configure generate_json_mask.py (ann_dir, ...)
    • run it
  2. Start training

    • cd ./experiments/baseline/
    • configure coco_loader (line 198, img_path)
    • configure train_pose.py (--train_dir)
    • run it
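The preprocessing step ultimately turns keypoint annotations into ground-truth confidence maps (and part-affinity vector maps) at the network's output resolution. A minimal NumPy sketch of the heatmap side, with made-up keypoints and names; this is an illustration, not the repo's actual generate_json_mask.py code:

```python
import numpy as np

def gaussian_heatmap(height, width, keypoint, sigma=7.0):
    """Gaussian confidence map centered on one (x, y) keypoint."""
    xs = np.arange(width)
    ys = np.arange(height)[:, None]
    x, y = keypoint
    d2 = (xs - x) ** 2 + (ys - y) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical example: two keypoints of the same part type on a 46x46 grid.
# Overlapping maps are combined with a max, so each peak stays at 1.0.
heat = np.maximum(gaussian_heatmap(46, 46, (10, 12)),
                  gaussian_heatmap(46, 46, (30, 25)))
print(heat.shape)        # (46, 46)
print(float(heat[12, 10]))  # 1.0 at the first keypoint
```

The sigma here is also the knob varied in the eval_mechanism experiments below.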

Test and eval

  1. Test a single image
    • ./evaluation/test_pose.py
  2. Evaluate the Caffe model downloaded from the author
    • ./evaluation/eval_caffe.py: 53.8% (50 images)
  3. Evaluate the PyTorch model converted from the Caffe model
    • ./preprocessing/convert_model.py
    • ./evaluation/eval_pytorch.py: 54.4% (50 images), 54.1% (1000 images)
  4. Evaluate a PyTorch model you trained yourself
    • ./evaluation/eval_pytorch.py
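The decoding step in test/eval scripts of this kind extracts local maxima ("all_peaks") from each predicted heatmap above a threshold before limbs are assembled. A rough, self-contained NumPy sketch of that peak extraction; function names and the threshold are illustrative, not the repo's exact code:

```python
import numpy as np

def find_peaks(heatmap, thresh=0.1):
    """Return (x, y, score) for every local maximum above thresh."""
    padded = np.pad(heatmap, 1, mode="constant")
    center = padded[1:-1, 1:-1]
    # A peak is >= each of its 4 neighbours and above the threshold.
    is_peak = ((center >= padded[:-2, 1:-1]) & (center >= padded[2:, 1:-1]) &
               (center >= padded[1:-1, :-2]) & (center >= padded[1:-1, 2:]) &
               (center > thresh))
    ys, xs = np.nonzero(is_peak)
    return [(int(x), int(y), float(heatmap[y, x])) for x, y in zip(xs, ys)]

hm = np.zeros((5, 5))
hm[2, 3] = 0.9  # single synthetic peak
print(find_peaks(hm))  # [(3, 2, 0.9)]
```

Experiment 4 below replaces exactly this output with the ground-truth keypoints to measure how much the peak detection itself costs.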

Results

  1. Caffe model evaluated by the Python scripts
    • 53.8% (50 images)
  2. PyTorch model converted from Caffe by the Python scripts
    • 54.4% (50 images), 54.1% (1000 images)
  3. PyTorch model trained on train2014
    • 45.9% (50 images) after 60000 iterations (stepsize = 50000)

Experiments

<1> mechanism (eval_mechanism)

  1. heatmaps and vecmaps generated from ground truth -> post-processing: 60.8%
  2. add redundant connections: 60.8% -> 67.7%
  3. increase sigma (7 -> 9): 67.7% -> 68.8%
  4. replace all_peaks with ground-truth keypoints: 68.8% -> 78% (does the gain come from single-person or multi-person cases?)
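The gain in experiment 2 has a simple graph interpretation: if one limb connection is missed during assembly, a redundant edge can still join a person's parts into one cluster. A toy union-find illustration; the part indices and edges are made up, not the COCO skeleton:

```python
def components(n_parts, edges):
    """Count connected components among detected parts (union-find)."""
    parent = list(range(n_parts))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n_parts)})

# Toy parts: 0 nose, 1 neck, 2 shoulder, 3 ear.
# Suppose the nose-neck and neck-ear limbs are missed by the parser:
print(components(4, [(1, 2)]))                  # 3 clusters, person fragmented
# Redundant edges (shoulder-ear, nose-ear) reconnect everything:
print(components(4, [(1, 2), (2, 3), (0, 3)]))  # 1 cluster
```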

<2> VGG_1branch (./experiments/1branch)

  1. merge the L1 branch and the L2 branch into a single branch
  2. train set: valminusminival2014; test set: minival2014
  3. 42.9%
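The merge in step 1 can be pictured as one shared trunk with a single 57-channel output (38 PAF channels + 19 heatmap channels, COCO layout) that is split afterwards, instead of two parallel branches. A PyTorch sketch with illustrative layer sizes, not the repo's exact 1branch architecture:

```python
import torch
import torch.nn as nn

class MergedStage(nn.Module):
    """One stage predicting PAFs (38 ch) and heatmaps (19 ch)
    from a single shared branch instead of two separate ones.
    Layer counts/widths are illustrative assumptions."""
    def __init__(self, in_ch=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(128, 38 + 19, 1)  # PAFs and heatmaps together

    def forward(self, x):
        y = self.out(self.trunk(x))
        return y[:, :38], y[:, 38:]  # split into vecmaps and heatmaps

paf, heat = MergedStage()(torch.zeros(1, 128, 46, 46))
print(paf.shape, heat.shape)  # (1, 38, 46, 46) and (1, 19, 46, 46)
```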