Results on the Human3.6M dataset
Albert-LZG opened this issue · 5 comments
Hi @JimmySuen,
First, thanks for sharing your work. I am trying to reproduce the paper's results on the Human3.6M dataset. I get a competitive result on protocol 1 (PA MPJPE 40.96 vs. the paper's 40.6), but a noticeably worse result on protocol 2 (MPJPE 53.5 vs. the paper's 49.6). I use the provided config file 'd-mh_ps-256_deconv256x3_min-int-l1_adam_bs32-4gpus_x300-270-290/lr1e-3.yaml'. My setup differs from the paper in two ways: a) I use lr 1e-4 instead of 1e-3 to avoid loss divergence; b) since the .txt format label files are not available, I generate the cache file from the .cdf files of the official dataset. What could the problem be, and what should I do to reproduce the protocol 2 result? Thank you.
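For reference, the asymmetry in the results above comes from the metrics themselves: protocol 1 applies a rigid Procrustes alignment (rotation, translation, scale) before measuring joint error, while protocol 2 measures raw MPJPE in camera space, so global pose errors that alignment removes still count against protocol 2. Below is a minimal numpy sketch of both metrics, assuming `pred` and `gt` are `(num_joints, 3)` arrays in millimetres; this is not the repo's evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error in mm (the protocol 2 metric)."""
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def pa_mpjpe(pred, gt):
    """MPJPE after Procrustes alignment of pred to gt (the protocol 1 metric)."""
    # Center both poses at their mean joint position.
    mu_pred, mu_gt = pred.mean(axis=0), gt.mean(axis=0)
    x, y = pred - mu_pred, gt - mu_gt
    # Optimal rotation via SVD of the cross-covariance (orthogonal Procrustes).
    u, s, vt = np.linalg.svd(x.T @ y)
    # Flip the last column/singular value if needed so r is a proper rotation.
    if np.linalg.det(u @ vt) < 0:
        u[:, -1] *= -1
        s[-1] *= -1
    r = u @ vt
    # Optimal isotropic scale for the aligned pose.
    scale = s.sum() / (x ** 2).sum()
    aligned = scale * (x @ r) + mu_gt
    return mpjpe(aligned, gt)
```

Comparing `mpjpe(pred, gt)` against `pa_mpjpe(pred, gt)` on a few validation sequences can show whether the protocol 2 gap comes from global rotation/scale errors or from per-joint errors.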
@Albert-LZG Hi, you could try fewer epochs (e.g., 140-90-120) or tune other hyper-parameters. As I remember, the learning rate should be 1e-3 to get the best result. Also, inspect the learning curve to see what happens.
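If it helps, the triplets above (300-270-290 in the config name, 140-90-120 in the suggestion) presumably read as total epochs followed by two learning-rate decay milestones. Under that assumption, a step-decay schedule would look like the hypothetical helper below; it is a sketch, not code from the repo.

```python
def step_lr(epoch, base_lr=1e-3, milestones=(90, 120), gamma=0.1):
    """Step-decay schedule: start at base_lr, multiply by gamma at each milestone.

    Hypothetical reading of the '140-90-120' triplet: train for 140 epochs,
    decaying the learning rate at epochs 90 and 120.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# step_lr(0) -> 1e-3, step_lr(100) -> 1e-4, step_lr(125) -> 1e-5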
@lck1201 I have tried several hyper-parameters, including the learning rate, the scale of the bounding box, rect_3d_height & rect_3d_width, and so on, but got similar or worse results. As for the learning rate, I tried 1e-3 twice and the loss diverged both times. BTW, could you provide the model checkpoint file?
@Albert-LZG We don't plan to release further models or checkpoint files for now.
Closing the issue; re-open if you have further questions.
@Albert-LZG
I am trying to reproduce the paper on the standard Human3.6M dataset.
Could you tell me how you prepared the dataset?
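In case it is useful, here is a rough sketch of turning the official .cdf annotations into a cache file, using the third-party `cdflib` package. The `'Pose'` variable name and the `(1, n_frames, 96)` layout are assumptions based on the official Human3.6M D3_Positions files, and the pickle cache format is hypothetical; this is not this repo's preparation code.

```python
import pickle

import cdflib  # pip install cdflib; reads the official .cdf annotation files
import numpy as np

def load_poses(cdf_path):
    """Load one D3_Positions .cdf file into an (n_frames, 32, 3) array.

    Assumes the official Human3.6M layout, where the 'Pose' variable is
    shaped (1, n_frames, 96), i.e. 32 joints x 3 coordinates in mm.
    """
    cdf = cdflib.CDF(cdf_path)
    pose = np.asarray(cdf.varget('Pose'))
    return pose.reshape(-1, 32, 3)

def build_cache(cdf_paths, out_file='h36m_cache.pkl'):
    """Dump all sequences into a single pickle cache (hypothetical format)."""
    cache = {path: load_poses(path) for path in cdf_paths}
    with open(out_file, 'wb') as f:
        pickle.dump(cache, f)
```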