maincold2/Compact-3DGS

About RVQ loss and learnable codebook

Closed · 2 comments

LRLVEC commented

The paper presents a loss for training the codebook; however, the code shows that the residual vector quantization (RVQ) codebook is not learnable, and there is no commitment_weight or orthogonal_loss for the RVQ either. The vq loss printed during training steps with RVQ is also zero.

Is there an explanation for this? Or does RVQ already work well without a learnable codebook?

maincold2 commented

As mentioned in the paper, we learn the codebooks only during the last 1K iterations.
The vq losses at earlier iterations are therefore set to zero. This strategy keeps training fast while still delivering good performance. In addition, since we aim to learn codebooks that represent the attributes, we did not use a commitment loss.
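To make the schedule concrete, here is a minimal sketch of gating the RVQ step to the final 1K iterations. It assumes the vector-quantize-pytorch package; the hyperparameter values, the `RVQ_START` name, and the training-loop shape are illustrative, not taken from this repository.

```python
import torch
from vector_quantize_pytorch import ResidualVQ

# Illustrative values only; the actual Compact-3DGS settings may differ.
TOTAL_ITERS = 30_000
RVQ_START = TOTAL_ITERS - 1_000  # learn codebooks only in the last 1K iterations

# Residual VQ over per-Gaussian attribute vectors.
# commitment_weight=0.0 mirrors the choice above not to use a commitment loss.
rvq = ResidualVQ(
    dim=48,             # attribute dimensionality (illustrative)
    num_quantizers=6,   # number of residual stages (illustrative)
    codebook_size=64,   # entries per codebook (illustrative)
    commitment_weight=0.0,
)

attributes = torch.randn(1, 10_000, 48)  # stand-in for per-Gaussian attributes

for iteration in range(TOTAL_ITERS):
    if iteration < RVQ_START:
        # Codebooks are untouched, so any printed vq loss reads as zero.
        vq_loss = torch.tensor(0.0)
        quantized = attributes
    else:
        # Codebooks update during these forward passes; with no commitment
        # loss, the returned loss terms also stay at zero.
        quantized, indices, losses = rvq(attributes)
        vq_loss = losses.sum()
    # total = rendering_loss + vq_loss, followed by backward() and the
    # optimizer step (omitted here).
```

Note that in this package's default configuration the codebooks are updated by an exponential moving average during the forward pass rather than by gradient descent, which would be consistent with the codebook not showing up as a learnable parameter.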

Thank you for your interest in our work.

LRLVEC commented

Thanks for your detailed reply!