Fictionarry/DNGaussian

Questions about freezing scaling and rotation for depth normalization and further freezing center for soft depth

Closed this issue · 2 comments

Hello, thanks for sharing your excellent work. I have two questions.

  1. In the paper you mention that, during depth regularization, the scaling $s$ and rotation $q$ are frozen, and that the center $\mu$ is additionally frozen for soft depth normalization. However, I don't understand how these parameters are frozen in the code, e.g. whether you set the corresponding learning rates to 0. Could you point me to the relevant code?

  2. My second question concerns the loss terms below, which do not seem to be mentioned in the paper. Could you explain what they do?

train_dtu.py

```python
# Reg
loss_reg = torch.tensor(0., device=loss.device)
shape_pena = (gaussians.get_scaling.max(dim=1).values / gaussians.get_scaling.min(dim=1).values).mean()
# scale_pena = (gaussians.get_scaling.max(dim=1).values).std()
scale_pena = ((gaussians.get_scaling.max(dim=1, keepdim=True).values)**2).mean()
opa_pena = 1 - (opacity[opacity > 0.2]**2).mean() + ((1 - opacity[opacity < 0.2])**2).mean()
```

Thanks in advance.

Hi, thanks for your interest.

  1. For convenience, we detach the corresponding attributes in the render functions (a minimal sketch of this stop-gradient mechanism follows after this list): https://github.com/Fictionarry/DNGaussian/blob/main/gaussian_renderer/__init__.py
  2. These are regularization terms that keep the Gaussians from becoming too big or too thin, and encourage them to be either solid or transparent (see the annotated snippet below). We regard them as common, trivial details and therefore did not mention them in the paper. The 3DGS† results in Table 1 already include these regularizers.
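
For future readers, here is a minimal, self-contained sketch of the detach-based freezing in point 1, assuming plain PyTorch autograd. The names (`mu`, `s`, `q`, `opacity`, `toy_depth`) are illustrative stand-ins, not the repo's actual identifiers; the point is only that `.detach()` keeps a value in the forward pass while cutting gradient flow to it, which freezes the parameter with respect to that particular loss without changing its learning rate:

```python
import torch

# Illustrative sketch only -- not DNGaussian's actual render code.
mu = torch.randn(8, 3, requires_grad=True)       # Gaussian centers
s = torch.rand(8, 3, requires_grad=True)         # scalings
q = torch.randn(8, 4, requires_grad=True)        # rotation quaternions
opacity = torch.rand(8, 1, requires_grad=True)   # opacities

def toy_depth(mu, s, q, opacity):
    # Stand-in for a differentiable depth rasterizer.
    return (mu.norm(dim=1) * s.mean(dim=1) * q.norm(dim=1) * opacity.squeeze(1)).mean()

# Hard depth regularization: freeze shape (s, q) via detach; the center mu still learns.
loss_hard = toy_depth(mu, s.detach(), q.detach(), opacity)
loss_hard.backward()
print(mu.grad is not None, s.grad is None, q.grad is None)  # True True True

# Soft depth regularization: additionally freeze the center mu, so only
# opacity receives gradients from this loss.
mu.grad, opacity.grad = None, None
loss_soft = toy_depth(mu.detach(), s.detach(), q.detach(), opacity)
loss_soft.backward()
print(mu.grad is None, opacity.grad is not None)  # True True
```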
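
And to connect point 2 back to the snippet quoted in the question, here are the same three terms with explanatory comments; the comments are one reading of the code (with `scaling` standing for `gaussians.get_scaling`), not documentation from the repo:

```python
# Anisotropy penalty: ratio of each Gaussian's largest to smallest scaling axis;
# large ratios correspond to needle-like ("too thin") Gaussians.
shape_pena = (scaling.max(dim=1).values / scaling.min(dim=1).values).mean()

# Size penalty: squared largest axis, discouraging Gaussians that grow too big.
scale_pena = ((scaling.max(dim=1, keepdim=True).values) ** 2).mean()

# Opacity penalty: per the explanation above, intended to push opacities toward
# the extremes (solid or transparent), with 0.2 as the threshold between regimes.
opa_pena = 1 - (opacity[opacity > 0.2] ** 2).mean() + ((1 - opacity[opacity < 0.2]) ** 2).mean()
```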

Thanks for your quick and detailed explanation.