xkunwu/depth-hand

evaluate on other dataset

Great work!! Have you tested your method on other datasets? I saw a folder called nyu_hand. Can you tell me the results on NYU (mean error or other metrics)? Thanks!!

I tested the 'super_edt2m' model on the NYU dataset, but the training phase stopped at epoch 2, and I got this:
```
19-03-10 15:12:56 [INFO ] Break due to validation loss starts to grow: 30339.673275862056 --> 34615.77036206898
19-03-10 15:12:56 [INFO ] Total training time: 0:16:06.152846 for 2 epoches, average: 0:08:03.076423.
19-03-10 15:14:52 [INFO ] epoch evaluate mean loss: 25615.8134
```
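
For context, that break message comes from an early-stopping check on the validation loss. A minimal sketch of that kind of check (illustrative, with hypothetical names - not the repo's actual code):

```python
# Minimal sketch of a validation-loss early-stopping check
# (hypothetical names; not the repo's actual code).
def should_stop(valid_losses):
    """Break training once the validation loss starts to grow."""
    if len(valid_losses) < 2:
        return False
    prev, curr = valid_losses[-2], valid_losses[-1]
    if curr > prev:
        print('Break due to validation loss starts to grow: '
              '{} --> {}'.format(prev, curr))
        return True
    return False

# The two values from the log above would trigger the break at epoch 2.
print(should_stop([30339.673275862056, 34615.77036206898]))  # True
```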

The training stopped because the validation check complained about divergence. That is mainly due to the different data format, which requires very different preprocessing steps and data storage. Currently I have no plans to implement support for the NYU dataset, but any contribution is welcome. Please take a look at the implementation for the BigHand dataset - a little copy/paste and adaptation should be enough to extend the algorithm, IMO.
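
As a rough sketch of what that adaptation involves (all names below are illustrative assumptions, not the repo's actual API), a NYU provider mainly needs NYU-specific depth decoding and annotation parsing:

```python
import numpy as np
from PIL import Image


class nyu_provider(object):
    """Sketch of a NYU data provider filling the same role as the
    BigHand one (class and method names are illustrative only)."""

    def __init__(self, data_dir):
        self.data_dir = data_dir

    @staticmethod
    def read_depth(png_path):
        """NYU packs 16-bit depth into the green (high byte) and blue
        (low byte) channels of an RGB PNG - a different storage format
        from BigHand's depth images."""
        rgb = np.asarray(Image.open(png_path), dtype=np.uint16)
        return (rgb[..., 1] << 8) | rgb[..., 2]  # depth in mm

    def load_annotations(self, split):
        """NYU ships joint annotations in a MATLAB joint_data.mat file,
        so parsing differs entirely from BigHand's text annotations."""
        raise NotImplementedError('adapt from the BigHand implementation')
```

The depth decoding is the key difference: feeding NYU images through a pipeline tuned for a different storage format is consistent with the divergence seen above.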

Thanks for your reply. I will try to do this when I have time.
If I'm not mistaken, the results in your paper were obtained on your own split of the original training set. Have you tested on the official test set?

I did not make it in time for the Hands17 challenge, so the results were obtained on splits of the training set - please also refer to the experiment section of the paper.
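
For anyone reproducing that protocol, holding out an evaluation split from the training frames can be sketched like this (illustrative only; the exact split used in the paper is not specified here):

```python
import numpy as np

# Illustrative only: carve a held-out evaluation split from the training
# frames; the exact split used in the paper is not reproduced here.
rng = np.random.RandomState(0)
num_frames = 100000  # hypothetical frame count
perm = rng.permutation(num_frames)
cut = int(0.9 * num_frames)
train_idx, eval_idx = perm[:cut], perm[cut:]
print(len(train_idx), len(eval_idx))
```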