3DTopia/OpenLRM

Why is the encoder not frozen in the training config?


Hi OpenLRM Team,
I don't see any mention of freezing the encoder in the LRM paper. Why do you choose to set the encoder freeze option to false?

Hi,

We set encoder freeze to False by default, which means the encoder is trainable.
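For reference, here is a minimal sketch of what such a freeze flag typically does in PyTorch. This is not the actual OpenLRM code; `apply_encoder_freeze` is a hypothetical helper, shown only to illustrate how the setting maps onto `requires_grad`.

```python
import torch.nn as nn

def apply_encoder_freeze(encoder: nn.Module, encoder_freeze: bool) -> nn.Module:
    """Toggle gradient updates for the image encoder.

    encoder_freeze=False (the default discussed above) keeps all encoder
    parameters trainable; True would exclude them from optimization.
    """
    for param in encoder.parameters():
        param.requires_grad = not encoder_freeze
    return encoder

# Usage sketch: with encoder_freeze=False the encoder's parameters stay
# in the optimizer's trainable set and are updated during training.
encoder = apply_encoder_freeze(nn.Linear(768, 768), encoder_freeze=False)
```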

We tried both a frozen and a trainable encoder in earlier experiments and found that a trainable encoder reaches a lower loss and somewhat better performance.