google/mannequinchallenge

will you share the MannequinChallenge Dataset?

Lvhhhh opened this issue · 2 comments

Will you share the MannequinChallenge dataset?
And I'd also like to know the details of training the 3-input model.

fcole commented

We've made the list of video ids available now at: google.github.io/mannequinchallenge. We don't have plans to release any MVS depth data, unfortunately.

Lvhhhh commented

Fine. Besides the training data, what is the difference between your 3-input monocular model and the network in "Single-Image Depth Perception in the Wild" that you referenced? Your results are better than theirs. Do you have some magic code? I want to learn more about the training details of the 3-input monocular model. Can you give me some details?
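One concrete difference beyond training data, as far as I understand: Chen et al.'s "Single-Image Depth Perception in the Wild" is trained with an ordinal ranking loss on relative depth pairs, while the MannequinChallenge paper supervises with a scale-invariant loss on log depth (plus gradient-matching terms), which suits MVS supervision that is only valid up to a global scale. A minimal numpy sketch of the scale-invariant term, with hypothetical names and a binary validity mask as an assumption:

```python
import numpy as np

def scale_invariant_log_loss(pred_log, gt_log, mask):
    """Scale-invariant MSE in log-depth space (Eigen-style sketch).

    A global rescaling of linear depth is a constant shift in log depth;
    the second term below cancels exactly that shift.
    pred_log, gt_log: (H, W) arrays of log depth.
    mask: (H, W) binary array marking pixels with valid supervision.
    """
    diff = (pred_log - gt_log) * mask
    n = max(mask.sum(), 1)             # number of valid pixels
    mse = (diff ** 2).sum() / n        # data term
    shift = diff.sum() ** 2 / n ** 2   # cancels a global log-depth shift
    return mse - shift
```

Shifting the prediction by a constant in log space (equivalently, rescaling the predicted depth) leaves this loss unchanged, so the network is not penalized for the scale ambiguity inherent in the MVS depth.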