panmari/stanford-shapenet-renderer

A slight bias of output depth maps


Hello, I've run into a problem where the output depth maps seem to have a slight offset from the ground-truth depth.

One example: I rendered 90 views around a ShapeNet model and ran TSDF fusion on the output depth maps in EXR format. The yellow image below shows the fusion result, and the gray image shows the GT mesh. The two shapes look identical, but when overlaid, the fusion result is consistently a little thicker than the GT.

[Image: fusion result (yellow) overlaid on the GT mesh (gray)]

I noticed this because I am trying to train a module of my network supervised by very accurate depth maps. I found a small bias in the results of the trained network and ultimately traced it back to the inaccurate depth maps. It bothered me for a while. Do you have any thoughts on this?

P.S. The output depth maps are 16-bit, OPEN_EXR format.
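
For context, a minimal sketch of the kind of fusion pipeline I mean, assuming Open3D's `ScalableTSDFVolume` (the resolution, intrinsics, file names, and pose files below are placeholders, not the script's actual outputs):

```python
# Sketch of TSDF fusion over rendered EXR depth maps, using Open3D.
# All file names, intrinsics, and poses are hypothetical placeholders.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # newer OpenCV builds gate EXR I/O
import cv2
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    width=512, height=512, fx=711.1, fy=711.1, cx=256.0, cy=256.0)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.004,  # voxel size in scene units
    sdf_trunc=0.02,      # truncation band; too large a value fattens the surface
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.NoColor)

for i in range(90):
    # EXR depth is metric float; keep one channel if it was saved as RGB.
    d = cv2.imread(f"view_{i:03d}_depth.exr", cv2.IMREAD_UNCHANGED)
    if d.ndim == 3:
        d = d[..., 0]
    depth = o3d.geometry.Image(np.ascontiguousarray(d, dtype=np.float32))
    dummy_color = o3d.geometry.Image(np.zeros((*d.shape, 3), dtype=np.uint8))
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        dummy_color, depth, depth_scale=1.0, depth_trunc=5.0,
        convert_rgb_to_intensity=False)
    extrinsic = np.loadtxt(f"view_{i:03d}_pose.txt")  # 4x4 world-to-camera
    volume.integrate(rgbd, intrinsic, extrinsic)

o3d.io.write_triangle_mesh("fused.ply", volume.extract_triangle_mesh())
```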

mvoelk commented

I also once had such suspicions about Blender depth renderings, but did not investigate further at the time, so I am very interested in an explanation of this phenomenon.

I am not really familiar with TSDF fusion. Do you have to provide both the intrinsic and extrinsic camera parameters to the algorithm?
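
For reference, recovering both from Blender's Python API usually looks roughly like this (a sketch assuming horizontal sensor fit, square pixels, and no lens shift; not necessarily what the renderer script does):

```python
# Sketch: derive pinhole intrinsics and world-to-camera extrinsics
# from the active Blender camera. Assumes horizontal sensor fit,
# square pixels, and zero lens shift.
import bpy
import numpy as np

scene = bpy.context.scene
cam_obj = scene.camera
cam = cam_obj.data

width = scene.render.resolution_x * scene.render.resolution_percentage / 100
height = scene.render.resolution_y * scene.render.resolution_percentage / 100

# Focal length in pixels from focal length (mm) and sensor width (mm).
fx = cam.lens / cam.sensor_width * width
fy = fx  # square pixels assumed
K = np.array([[fx, 0.0, width / 2.0],
              [0.0, fy, height / 2.0],
              [0.0, 0.0, 1.0]])

# Extrinsic: Blender cameras look down -Z with +Y up, so flip to the
# usual computer-vision convention (+Z forward, +Y down) before inverting.
cam_to_world = np.array(cam_obj.matrix_world)
flip = np.diag([1.0, -1.0, -1.0, 1.0])
world_to_cam = np.linalg.inv(cam_to_world @ flip)
```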

I later found that there is no problem with the rendering script. The issue is that TSDF fusion makes the reconstructed mesh a little thicker; the rendered depth maps themselves are accurate beyond doubt.
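
One way to confirm this independently of the fusion is to back-project a single depth map and measure point-to-mesh distance against the GT mesh. A rough sketch, assuming Open3D >= 0.14 for `RaycastingScene` (all file names and formats are placeholders):

```python
# Sketch: sanity-check one rendered depth map against the GT mesh by
# back-projecting pixels and measuring point-to-mesh distance.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
import cv2
import numpy as np
import open3d as o3d

depth = cv2.imread("view_000_depth.exr", cv2.IMREAD_UNCHANGED)
if depth.ndim == 3:
    depth = depth[..., 0]
K = np.loadtxt("intrinsics.txt")                # 3x3 pinhole matrix
cam_to_world = np.loadtxt("view_000_pose.txt")  # 4x4, CV convention

h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
valid = np.isfinite(depth) & (depth > 0) & (depth < 1e3)  # drop background
z = depth[valid]
x = (u[valid] - K[0, 2]) * z / K[0, 0]
y = (v[valid] - K[1, 2]) * z / K[1, 1]
pts_cam = np.stack([x, y, z], axis=-1)
pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]

mesh = o3d.io.read_triangle_mesh("gt_mesh.obj")
rc = o3d.t.geometry.RaycastingScene()
rc.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))
dist = rc.compute_distance(o3d.core.Tensor(pts_world.astype(np.float32))).numpy()
print("mean point-to-mesh distance:", dist.mean())  # near zero if depth is accurate
```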

Maybe try changing the render engine to Cycles? I found that Eevee does some rounding on depth maps.
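
For anyone trying this, the engine can be switched from the rendering script; the full-float EXR setting is a separate guess at reducing quantization, not something verified here:

```python
# Switch the render engine to Cycles before rendering depth maps.
import bpy

bpy.context.scene.render.engine = 'CYCLES'
# Optionally store EXR at full float precision so depth is not quantized
# to half floats (untested whether this matters for the bias above).
bpy.context.scene.render.image_settings.file_format = 'OPEN_EXR'
bpy.context.scene.render.image_settings.color_depth = '32'
```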