philgras/neural-head-avatars

Question about using a custom rendering pipeline instead of PyTorch3D's.


Hi @philgras and @malteprinzler, thanks for this brilliant work! It's inspiring that NHA uses a hybrid representation to jointly learn geometry and texture through inverse rendering.
In the code base, the stock PyTorch3D renderer/shader is not used. Instead, rasterization runs in a custom forward step without gradient tracking, and a custom screen_grad op backpropagates the gradients. I can't quite grasp the intention behind this design.
Are there any drawbacks to using the PyTorch3D renderers? Could you provide an explanation or references for the design of this rendering pipeline?
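To check my understanding of the pattern, here is a minimal sketch of what I believe such a screen-gradient op looks like, written against plain PyTorch autograd (all names are hypothetical, and this is not copied from the repo): the forward pass is an identity on the rendered image, and the backward pass chains the incoming image gradient through finite-difference image gradients to obtain a gradient on the 2D screen-space coordinates.

```python
import torch
import torch.nn.functional as F

class ScreenGrad(torch.autograd.Function):
    """Identity forward; backward routes the image gradient onto
    screen-space coordinates via finite-difference image gradients."""

    @staticmethod
    def forward(ctx, image, screen_coords):
        # image: (B, C, H, W), rasterized without gradient tracking
        # screen_coords: (B, H, W, 2), pixel positions that require grad
        ctx.save_for_backward(image)
        return image

    @staticmethod
    def backward(ctx, grad_out):
        (image,) = ctx.saved_tensors
        # Central finite differences of the image along x and y.
        dx = F.pad((image[..., :, 2:] - image[..., :, :-2]) / 2, (1, 1, 0, 0))
        dy = F.pad((image[..., 2:, :] - image[..., :-2, :]) / 2, (0, 0, 1, 1))
        # Chain rule: dL/d(coord) = sum_c dL/dI_c * dI_c/d(coord).
        grad_coords = torch.stack(
            [(grad_out * dx).sum(dim=1), (grad_out * dy).sum(dim=1)], dim=-1
        )
        return grad_out, grad_coords

# usage: image = ScreenGrad.apply(rendered_image, screen_coords)
```

Is that roughly the idea?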
Thanks in advance!

Hi,
Thanks for your interest. The motivation for this kind of rasterization comes from the original 3DMM paper, "A Morphable Model for the Synthesis of 3D Faces", and from Face2Face. We experimented with soft rasterization, but it has the drawback that multiple faces are aggregated for each pixel, and the texture MLP has to be evaluated for every one of those faces. This increased resource consumption significantly, which is why we favored the screen-gradient-based differentiable rasterizer.
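For concreteness, here is a toy sketch of the hard-rasterization setting using the stock PyTorch3D API (just an illustration with made-up scene data, not our actual training code): with faces_per_pixel=1 and the rasterizer wrapped in no_grad, each pixel yields a single surface point, so a texture MLP only has to run once per covered pixel.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, RasterizationSettings,
)

# A single triangle in front of the default camera, just to be runnable.
verts = torch.tensor([[[-0.5, -0.5, 2.0], [0.5, -0.5, 2.0], [0.0, 0.5, 2.0]]])
faces = torch.tensor([[[0, 1, 2]]])
meshes = Meshes(verts=verts, faces=faces)

settings = RasterizationSettings(
    image_size=256,
    faces_per_pixel=1,  # hard rasterization; soft blending needs K > 1
    blur_radius=0.0,
)
rasterizer = MeshRasterizer(
    cameras=FoVPerspectiveCameras(), raster_settings=settings
)

with torch.no_grad():  # no gradients through the rasterizer itself
    fragments = rasterizer(meshes)

# fragments.pix_to_face has shape (1, 256, 256, 1): one candidate face per
# pixel, so a texture MLP queried at the hit points runs once per covered
# pixel. With soft rasterization (faces_per_pixel=K > 1), the MLP would run
# K times per pixel before the results are alpha-blended.
```

Geometry gradients then come from the screen-gradient op rather than from the rasterizer itself.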

Hope this helps!

Thank you so much for the informative reply!