ClementPinard/SfmLearner-Pytorch

Inverse warp with scaled depth

C2H5OHlife opened this issue · 3 comments

Since the depth output of this model is scaled by a factor according to ground truth, why does this code manage to inverse warp correctly? Does that mean we use a wrong depth map for warping?

The inverse warp depends on both depth and pose. While ground truth depth and pose will generate the correct warping, any variation of the same form, with depth and pose translation both multiplied by the same scale factor, will also generate the correct warping.

Here, since we learn both by inverse warping, we end up with that scale factor, which can then be determined by comparing the estimated pose to ground truth, i.e. vehicle speed.
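To see the ambiguity concretely, here is a minimal numpy sketch (not the actual warping code from this repo; the intrinsics, depth and translation values are made up for illustration): back-projecting a pixel with a scaled depth and re-projecting it with an equally scaled translation lands on exactly the same source pixel.

```python
import numpy as np

def project(K, depth, t):
    """Back-project one pixel at the given depth, translate it by t
    (rotation omitted for simplicity) and re-project it into the source view."""
    u, v = 100.0, 80.0                              # an arbitrary target pixel
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])   # 3D point in camera frame
    p_src = K @ (p_cam + t)                         # move to source frame and project
    return p_src[:2] / p_src[2]                     # pixel coordinates in the source image

K = np.array([[718.9, 0.0, 607.2],
              [0.0, 718.9, 185.2],
              [0.0,   0.0,   1.0]])   # KITTI-like intrinsics (illustrative values)
t = np.array([0.0, 0.0, 1.3])         # translation predicted by PoseNet (arbitrary units)
depth = 12.0                          # depth predicted by DispNet (same arbitrary units)

s = 5.0                               # any scale factor
print(project(K, depth, t))           # original prediction
print(project(K, s * depth, s * t))   # depth and translation both scaled -> same pixel
```

Both calls print identical coordinates, so the photometric loss cannot distinguish the true scale from any rescaled version of depth and translation.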

That’s quite clear, thank you :) So I wonder if it is possible to produce true depth, since the KITTI dataset provides calibration? For example, use depth = focal length * baseline / disparity instead of depth = 1 / disparity?

The baseline is never used here, since we only work with monocular cameras. The key is to know the ground truth displacement magnitude. In the case of learning inverse warping from stereo, the displacement is indeed just the baseline, which makes the depth very easy to learn with the right scale factor.

Here, since we compute the inverse warp according to the actual displacement of the car, you need to (see the sketch after this list):

1) figure out the displacement magnitude of the car. From GPS values or even the wheel speed, it's not very hard to figure out, given the frame timings.
2) compare it to the translation magnitude estimated by PoseNet and figure out the scale factor, so that the depth can be rescaled accordingly.
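A rough sketch of those two steps (all variable names and values below are hypothetical placeholders, not code from this repo):

```python
import numpy as np

# Hypothetical inputs: PoseNet translations between consecutive frames
# (N x 3, in network units) plus vehicle speed and frame timing.
pred_translations = np.array([[0.01, 0.002, 0.12],
                              [0.01, 0.001, 0.11]])   # PoseNet output (placeholder)
vehicle_speed = 8.5                                   # m/s, e.g. from GPS or wheel odometry
dt = 0.1                                              # time between frames, in seconds

# 1) ground-truth displacement magnitude of the car between frames
gt_displacement = vehicle_speed * dt                  # metres

# 2) PoseNet's translation magnitude -> scale factor
pred_displacement = np.linalg.norm(pred_translations, axis=1).mean()
scale_factor = gt_displacement / pred_displacement

# rescale the predicted depth map to metric depth
pred_depth = np.random.rand(128, 416) * 3             # placeholder network output
metric_depth = pred_depth * scale_factor
```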

This is in essence what's done in the test_disp script that I provide. The obvious drawback is that you need to know the speed during training, or you will have to run PoseNet during evaluation.