google/mannequinchallenge

Output depth data format

arvkr opened this issue · 5 comments

arvkr commented

Hi,
Thanks for sharing the inference code. When the model infers depth from a single image, is the estimated depth in meters? What format is it exactly? Since the ground-truth info is not there, I am not able to figure this out directly. Thank you.

fcole commented

The model estimates depth up to an unknown scale parameter, so the units themselves are not that meaningful. The error metrics we use for evaluation measure the accuracy of the depth map up to scale. This is a consequence of the training data (multi-view stereo) also having a scale ambiguity.
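
For concreteness, here is a minimal sketch of comparing a prediction to ground truth "up to scale" via a least-squares alignment. This is a generic illustration, not necessarily the exact metric used in the paper, and the function names are hypothetical:

```python
import numpy as np

def align_scale(pred, gt, mask=None):
    # Least-squares scale s minimizing ||s * pred - gt||^2 over valid pixels.
    if mask is None:
        mask = gt > 0
    p, g = pred[mask].astype(np.float64), gt[mask].astype(np.float64)
    return np.dot(p, g) / np.dot(p, p)

def scale_invariant_rmse(pred, gt, mask=None):
    # RMSE after optimal scale alignment: one common "up to scale" error.
    if mask is None:
        mask = gt > 0
    s = align_scale(pred, gt, mask)
    diff = s * pred[mask] - gt[mask]
    return np.sqrt(np.mean(diff ** 2))
```

Because the optimal scale is solved for before computing the error, a prediction that is correct up to a global factor scores zero, which is exactly the property a scale-ambiguous training signal calls for.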

Hi fcole,
Do you mean that a depth map predicted by the pre-trained model is scaled by an unknown factor compared with the depth ground truth?

Hi, is the depth image predicted by the network a 32-bit continuous floating-point image, or is it just an 8-bit image?

fcole commented

Yes, the output is a floating-point value. Each output map is scaled by an unknown factor relative to the ground truth (i.e., it's not in units of meters or anything like that).
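
As an illustration, one way to handle a raw floating-point output is to keep it at full precision for quantitative use and only quantize for display. This is a sketch; `pred_depth` here is a random stand-in, not the repo's actual output variable:

```python
import numpy as np
from PIL import Image

# Stand-in for the raw float32 network output (H x W).
pred_depth = np.random.rand(256, 512).astype(np.float32)

# Keep full precision for any downstream quantitative use.
np.save("depth_pred.npy", pred_depth)

# For visualization only: min-max normalize and quantize to 8 bits.
# The absolute scale is arbitrary anyway, so only precision is lost.
lo, hi = pred_depth.min(), pred_depth.max()
vis = (pred_depth - lo) / (hi - lo + 1e-8)
Image.fromarray((vis * 255).astype(np.uint8)).save("depth_vis.png")
```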

Thanks for your reply. I found that this scaling factor is correlated with how the depth ground truth is normalized (e.g., to a range of 1 to 3 or 1 to 10 meters) when I train a model. The factor also increases as training progresses over more epochs.
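
For anyone who wants to reproduce this observation, here is one way to estimate the per-image factor and watch it drift over epochs. Median scaling is a common convention in monocular depth evaluation, not necessarily what this repo uses, and all names below are hypothetical:

```python
import numpy as np

def median_scale(pred, gt):
    # Robust per-image scale estimate: median of gt/pred over valid pixels.
    mask = gt > 0
    return np.median(gt[mask] / np.clip(pred[mask], 1e-8, None))

# Hypothetical usage: log the factor on a validation pair after each epoch.
# for epoch, (pred, gt) in enumerate(val_pairs):
#     print(epoch, median_scale(pred, gt))
```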