kakaobrain/nerf-factory

RefNerf Predicted Normals

gafniguy opened this issue · 2 comments

Hi,

Thanks a lot for this repo.

To the best of my knowledge, the normals are never actually visualized in this repo, so I added a few lines that compute each ray's normal as the sum of the per-sample normals weighted by the rendering weights, then normalize the result again.

    # Composite the per-sample normals along each ray using the rendering
    # weights, then renormalize to unit length.
    normals_deriv = (rendered_results[1]["weights"][..., None] * rendered_results[1]["normals"]).sum(1)
    normals_deriv = ref_utils.l2_normalize(normals_deriv)
    normals_pred = (rendered_results[1]["weights"][..., None] * rendered_results[1]["normals_pred"]).sum(1)
    normals_pred = ref_utils.l2_normalize(normals_pred)
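
For what it's worth, here is a minimal sketch of how I write these composited normals out as an image; it assumes the ray batch covers a full H x W image and uses matplotlib, neither of which comes from the repo itself:

    # Visualization sketch (H, W, and matplotlib are assumptions, not repo code).
    import matplotlib.pyplot as plt

    normals_rgb = (normals_pred + 1.0) / 2.0  # map unit normals from [-1, 1] to [0, 1]
    plt.imsave("normals_pred.png", normals_rgb.reshape(H, W, 3).cpu().numpy())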

What I get makes sense for the derived normals, but not for the predicted ones. They seem to collapse, as if the shading part of the MLP simply ignores them and relies only on the view direction?
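
One quick sanity check for that hypothesis (a sketch, with the tensor shapes assumed to be [num_rays, num_samples, 3]): if the predicted normals have collapsed to a near-constant direction, the norm of their mean will be close to 1 and their variance close to 0.

    # Collapse check (shapes assumed to be [num_rays, num_samples, 3]).
    n = ref_utils.l2_normalize(rendered_results[1]["normals_pred"])
    print("norm of mean normal:", n.mean(dim=(0, 1)).norm().item())
    print("per-axis variance:", n.var(dim=(0, 1)))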

Thanks in advance for any tips
[Image: rendered derived and predicted normals]

@seungjooshin Could you check this issue?

Thank you for mentioning the predicted normal issue. As you mentioned, the predicted normals of the lego scene are not used in the MLP, which is why there is no performance difference between mip-nerf and ref-nerf on that scene.

In ref-nerf, the predicted normals are used to address the noisiness of the density-based (derived) normals. I have therefore checked the results of the other scenes in the blender and shiny blender datasets, shown below (left: density-based, right: predicted).
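
For context, ref-nerf ties the predicted normals to the density-derived ones with a weighted penalty (R_p in the paper). A minimal sketch of that loss, with tensor names assumed rather than taken from the repo:

    import torch

    def predicted_normal_loss(weights: torch.Tensor, normals: torch.Tensor,
                              normals_pred: torch.Tensor) -> torch.Tensor:
        # R_p: per-sample squared error between derived and predicted normals,
        # weighted by the rendering weights, summed over samples and averaged
        # over rays (tensor names here are assumptions).
        return (weights * ((normals - normals_pred) ** 2).sum(-1)).sum(-1).mean()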

As a result, the normals are predicted well in the other scenes, in contrast to the lego scene. It is therefore more reasonable to use the predicted normals, which are smoother, than the density-based ones. For the exceptional case of the lego scene, replacing normals_to_use = normals_pred with normals_to_use = normals can be a solution:

    normals_to_use = normals  # instead of: normals_to_use = normals_pred

[Image: refnerf_normal, density-based (left) vs. predicted (right) normals for the blender and shiny blender scenes]