kwea123/nerf_pl

Is a positive loss/b_l necessary for valid training of nerf-w?

LongruiDong opened this issue · 1 comment

I am training nerf-w on Replica data, which contains hardly any transient objects.
The total loss became negative during the first epoch, because the beta loss b_l quickly drops to around -0.4.

I notice that you intend to keep the beta term positive here.
I think it is reasonable for beta to be quite small, since the data is almost static.
From your experience, do negative beta values and negative loss values affect training?

No, positive or not makes no difference. I add a constant to make it positive only so that I can use TensorBoard's "log scale" to visualize it better.
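
For context, a minimal sketch of such a beta term, assuming it is the mean of log(beta) over rays shifted by a constant (the function name and the exact offset value of 3 are illustrative, not taken from the repo):

```python
import torch

def beta_loss(beta, offset=3.0):
    """Beta regularization term: mean of log(beta) over rays, plus a constant.

    The offset only shifts the logged scalar so it stays positive and can be
    shown on TensorBoard's log-scale axis; it does not change the gradients,
    since the derivative of a constant with respect to beta is zero.
    """
    return offset + torch.log(beta).mean()
```

Because the constant has zero gradient, training behaves identically whether the logged value is positive or negative.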