Why is the depth rendering by pytorch3d different from blender?
yejr0229 opened this issue · 1 comment
yejr0229 commented
Hi, here is two images rendering by pytorch3d and blender, and the third is the difference between them:
I'd like to how can I get a result more close to blender? And here is my code to render the depth:
def get_relative_depth_map(fragments, pad_value=10):  # pad_value default is arbitrary; any background value works
    # zbuf holds the per-pixel depth of the nearest face; -1 marks pixels with no geometry
    absolute_depth = fragments.zbuf[..., 0]  # (B, H, W)
    no_depth = -1
    mask = absolute_depth != no_depth
    depth_min, depth_max = absolute_depth[mask].min(), absolute_depth[mask].max()
    target_min, target_max = 50, 255

    depth_value = absolute_depth[mask]
    depth_value = depth_max - depth_value  # invert: near surfaces get large values
    depth_value /= (depth_max - depth_min)  # normalize to [0, 1]
    depth_value = depth_value * (target_max - target_min) + target_min  # rescale to [50, 255]

    relative_depth = absolute_depth.clone()
    relative_depth[mask] = depth_value
    relative_depth[~mask] = pad_value  # background, not completely black
    return relative_depth

depth_maps = get_relative_depth_map(fragments)
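For what it's worth, the mapping above sends the nearest visible surface to 255, the farthest to 50, and empty pixels to `pad_value`. A minimal NumPy sketch of the same arithmetic (the synthetic `absolute_depth` values and `pad_value=10` here are made up for illustration):

```python
import numpy as np

# Hypothetical zbuf slice: -1 marks pixels with no geometry,
# mirroring PyTorch3D's convention for empty zbuf entries.
absolute_depth = np.array([[2.0, 3.0],
                           [4.0, -1.0]])
no_depth, pad_value = -1, 10
mask = absolute_depth != no_depth
d_min, d_max = absolute_depth[mask].min(), absolute_depth[mask].max()
t_min, t_max = 50, 255

relative = absolute_depth.copy()
# Invert so near surfaces map to t_max and far surfaces to t_min.
relative[mask] = (d_max - absolute_depth[mask]) / (d_max - d_min) * (t_max - t_min) + t_min
relative[~mask] = pad_value

print(relative)  # nearest (2.0) -> 255, farthest (4.0) -> 50, empty -> 10
```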
bottler commented
I have no idea. It looks a bit like the discrepancy increases as you move away from a certain point. Perhaps one of these is a Euclidean distance to the camera and the other is a distance from the camera plane? Maybe you can manually calculate what you think the distances should be at some special points?
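If that hypothesis is right, the two conventions can be converted into each other with the camera intrinsics. This is only a sketch of the geometry, not PyTorch3D's or Blender's API: `plane_depth_to_ray_depth` is a hypothetical helper, and `fx, fy, cx, cy` are assumed pinhole intrinsics in pixels. The discrepancy it predicts is zero at the principal point and grows toward the image edges, which matches the pattern described above.

```python
import numpy as np

def plane_depth_to_ray_depth(z, fx, fy, cx, cy):
    """Convert depth measured along the optical axis (distance from the
    camera plane) into Euclidean distance from the camera center.

    z: (H, W) array of plane depths; fx, fy, cx, cy: pinhole intrinsics.
    """
    h, w = z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx  # ray direction x at depth 1
    y = (v - cy) / fy  # ray direction y at depth 1
    # Length of the ray (x, y, 1) scaled by the plane depth z.
    return z * np.sqrt(x**2 + y**2 + 1.0)

# On a constant plane depth of 1, the ray depth is 1 at the principal
# point and larger everywhere else.
z = np.ones((5, 5))
d = plane_depth_to_ray_depth(z, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(d[2, 2], d[2, 4])  # center vs. off-axis pixel
```

Comparing one rendering against the other after applying this conversion (in whichever direction removes the radial pattern in the difference image) would confirm or rule out the hypothesis.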