hongsukchoi/Pose2Mesh_RELEASE

Confused about the performance of Pose2Mesh on Human3.6M

Cakin-Kwong opened this issue · 6 comments

The performance of Pose2Mesh on Human3.6M:
Training with Human3.6M:
MPJPE: 64.9
PA-MPJPE: 48.0

Training with Human3.6M and COCO:
MPJPE: 67.9
PA-MPJPE: 49.9

Best result:
MPJPE: 64.9
PA-MPJPE: 46.3

As mentioned in the paper, using more datasets to train Pose2Mesh decreases its performance on Human3.6M. I wonder whether the best result on Human3.6M is supposed to be trained with the Human3.6M dataset only? In that case, the best result should be the same as the one trained with Human3.6M. Or should it be trained with Human3.6M + COCO + MuCo? Would you please show me the training settings for Human3.6M?

the best result should be the same as the one trained with Human3.6M

I think so, as the train and test sets of Human3.6M have the same action categories and thus similar poses. This is discussed in the paper.

The training settings are at asset/yaml/.
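If it helps, here is one way to peek at those settings (a sketch only; the file name below is a placeholder, so substitute the actual Human3.6M config under asset/yaml/):

import yaml  # PyYAML

# Placeholder file name; pick the actual Human3.6M config under asset/yaml/
with open('asset/yaml/pose2mesh_human36.yml') as f:
    cfg = yaml.safe_load(f)

print(cfg)  # shows which datasets, schedules, etc. the run uses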

Thanks for your quick reply. Since the best result should be trained with Human3.6M only, I think Table 5 and Table 8 should report the same PA-MPJPE. Is this a miswriting?
[screenshot: paper_result]

No. Actually, the explanation is also in the paper.

When computing the PA-MPJPE in Table 5, I used the images from all cameras (4 in Human3.6M), which I think is natural.

In Table 8, for PA-MPJPE, I used only the frontal-camera images for a fair comparison with previous works. The paper says:

"We measured the PA-MPJPE of Pose2Mesh on Human3.6M by testing only on the frontal camera set, following the previous works [23, 27, 28]."
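For readers unfamiliar with the two metrics, here is a minimal NumPy sketch of MPJPE versus PA-MPJPE (illustrative only, not the repository's evaluation code; the (J, 3) joint-array shape and function names are assumptions):

import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error: average Euclidean distance
    # between predicted and ground-truth 3D joints, shapes (J, 3)
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    # Procrustes-aligned MPJPE: find the optimal scale, rotation, and
    # translation aligning pred to gt, then measure the remaining error
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(p.T @ g)  # Kabsch: cross-covariance SVD
    if np.linalg.det(Vt.T @ U.T) < 0:  # correct an improper rotation
        Vt[-1] *= -1
        s[-1] *= -1
    R = Vt.T @ U.T
    scale = s.sum() / (p ** 2).sum()
    return mpjpe(scale * p @ R.T + mu_g, gt)

PA-MPJPE is typically lower than MPJPE for the same prediction, since the alignment removes global scale, rotation, and translation differences before measuring.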

That solved my confusion. So in Table 8, the training set of Human3.6M is still subjects [1, 5, 6, 7, 8] and the test set is subjects [9, 11], but only the frontal camera set (camera 3) is used? Would you mind sharing the code to get this test set? Or how should I modify the code in Pose2Mesh?

  • So in Table 8, the training set of Human3.6M is still subjects [1, 5, 6, 7, 8] and the test set is subjects [9, 11], but only the frontal camera set (camera 3) is used?

Yes.

  • Would you mind sharing the code to get this test set? Or how should I modify the code in Pose2Mesh?

You just need to skip every camera annotation other than '4' when loading the test set, like this:

if cam != '4':  # front camera (Table 6)
    continue
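In context, that check sits inside the loop that loads the Human3.6M test annotations, roughly like this (a sketch; the file name and field names are assumptions, and the repo's Human36M dataset class will differ in detail):

import json

# Hypothetical annotation file and field name, for illustration only
with open('Human36M_subject9_data.json') as f:
    anns = json.load(f)

test_set = []
for ann in anns:
    if str(ann['cam_idx']) != '4':  # keep only the frontal camera
        continue
    test_set.append(ann)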

Thanks, I got the paper's result by following your suggestion.