albertpumarola/D-NeRF

tv loss deviation from paper

JulianKnodt opened this issue · 2 comments

Hey, this is really cool work on dynamic modeling. I was trying to reproduce the paper and found that I got significantly worse results than your reported values. I was digging through your code because I didn't expect the L2 loss alone to be able to accurately capture the dynamics, and I found the tv loss. If I understand correctly, it's an L2 loss between time steps, evaluated from the same camera position to encourage temporal smoothness. I was wondering if the results in the paper use this loss?
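To make sure I'm reading it right, here is a minimal sketch of what I understand the loss to be doing. This is my own paraphrase, not your code: `render_fn`, the argument names, and the time offset `delta_t` are all placeholders I made up for illustration.

```python
import torch

def temporal_smoothness_loss(render_fn, rays_o, rays_d, t, delta_t=0.01):
    """Hypothetical sketch of the tv-style loss as I understand it:
    an L2 penalty between renders of the same rays (same camera pose)
    at two nearby time steps. `render_fn` is assumed to map
    (rays_o, rays_d, t) -> per-ray RGB; it is not the actual D-NeRF API.
    """
    rgb_t = render_fn(rays_o, rays_d, t)                  # render at time t
    rgb_t_next = render_fn(rays_o, rays_d, t + delta_t)   # same rays, shifted time
    return torch.mean((rgb_t - rgb_t_next) ** 2)          # L2 over rays and channels
```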

I was also curious what tv stands for, as I'm not super familiar with this.

Thanks!

Hi @JulianKnodt, did you manage to reproduce the results? I also got much worse reconstructions.

@violetteshev I did not directly reproduce the results; I re-implemented this in my own repo without a coarse/fine approach. It might instead be worth trying to run NR-NeRF on the dataset and seeing how it performs.