how did you split train and test for MPII
nikhilchh opened this issue · 3 comments
You have provided JSON files for the MPII train and test sets.
How was the split created?
I followed the official split.
Independent questions:
1- How do you handle different datasets?
Every dataset has a different set of keypoints in a different order; for example, COCO differs from MPII.
I assume you would fix the network to output N keypoints that are common to all the datasets, and then write converters to bring every dataset onto the same page (same set and order of keypoints).
I wrote a lot of code that was very specific to COCO, and now it seems like a lot of work to adapt it for MPII.
Would love to know your opinion on this topic.
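The converter idea above can be sketched as a simple index remapping. This is a minimal illustration, not the repo's actual code (see its data/dataset.py for the real mapping); the 12-joint common set and its order here are my own hypothetical choice, while the COCO and MPII orders follow those datasets' standard annotation conventions:

```python
import numpy as np

# Hypothetical common 12-joint set (names and order chosen for this example)
COMMON = ['r_shoulder', 'l_shoulder', 'r_elbow', 'l_elbow', 'r_wrist', 'l_wrist',
          'r_hip', 'l_hip', 'r_knee', 'l_knee', 'r_ankle', 'l_ankle']

# Standard COCO 17-keypoint order
COCO = ['nose', 'l_eye', 'r_eye', 'l_ear', 'r_ear', 'l_shoulder', 'r_shoulder',
        'l_elbow', 'r_elbow', 'l_wrist', 'r_wrist', 'l_hip', 'r_hip',
        'l_knee', 'r_knee', 'l_ankle', 'r_ankle']

# Standard MPII 16-joint order
MPII = ['r_ankle', 'r_knee', 'r_hip', 'l_hip', 'l_knee', 'l_ankle', 'pelvis',
        'thorax', 'upper_neck', 'head_top', 'r_wrist', 'r_elbow', 'r_shoulder',
        'l_shoulder', 'l_elbow', 'l_wrist']

def to_common(kps, src_names):
    """Reorder an (N, D) keypoint array from a dataset's native joint
    order into the shared COMMON order (dropping joints not in COMMON)."""
    idx = [src_names.index(name) for name in COMMON]
    return np.asarray(kps)[idx]

# Usage: the same downstream code then works for both datasets.
coco_kps = np.zeros((17, 2))   # one person's COCO 2D keypoints
mpii_kps = np.zeros((16, 2))   # one person's MPII 2D keypoints
common_a = to_common(coco_kps, COCO)   # shape (12, 2)
common_b = to_common(mpii_kps, MPII)   # shape (12, 2)
```

With one lookup table per dataset, the network and loss only ever see the common set, so adding a new dataset means writing one name list rather than new dataset-specific code.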
2- Do most datasets provide 12 2D keypoints for the shoulders, elbows, wrists, hips, knees, and ankles?
3- Is it very important to take care of subtle differences in the way different datasets label a certain keypoint? For example, a shoulder from COCO might be slightly off compared to a shoulder from MPII.
- In this repo, I used the keypoint set of the 3D datasets. See https://github.com/mks0601/3DMPPE_POSENET_RELEASE/blob/master/data/dataset.py
- Yes.
- It does not seem very important for the Human3.6M and MuPoTS-3D dataset evaluations.