yashbhalgat/HashNeRF-pytorch

Question about hash encoding speed

Miles629 opened this issue · 1 comment

Hello, thank you for your code. I have some questions about hash encoding. As I understand it, I only need to replace the positional encoding and large MLP in my model with hash encoding and a small MLP to achieve faster training. In practice, however, my model takes 4 seconds to run 20 iterations with positional encoding and the large MLP, but 9 seconds with hash encoding and the small MLP.
Does this mean that hash encoding with a small MLP actually takes longer per iteration? And if so, how does hash encoding accelerate NeRF training?

Hi, yes, your understanding is correct.

  1. The "iterations per second" speed of HashNeRF-pytorch will NOT be faster than nerf-pytorch. The reason it takes 9 seconds (compared to 4 seconds) maybe because of the additional losses. You can comment out the computation of the TV_loss and see if it increases speed.
  2. Overall, the convergence of HashNeRF is much faster (e.g. ~30 minutes compared to ~20 hours with vanilla NeRF). To understand why, you would need to read the Instant-NGP paper; the second sketch after this list illustrates the core idea.
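
To illustrate point 1, here is a minimal, self-contained sketch of how a total-variation regularizer on the hash embeddings adds per-iteration cost and how to switch it off when benchmarking. The names `total_variation_loss`, `tv_loss_weight`, and `hash_features` are illustrative assumptions, not the exact identifiers used in this repo:

```python
import torch

def total_variation_loss(grid: torch.Tensor) -> torch.Tensor:
    # Illustrative TV penalty: squared differences between neighboring
    # rows of a feature table (a stand-in for the repo's TV loss).
    return ((grid[1:] - grid[:-1]) ** 2).mean()

# Dummy stand-ins for the photometric loss and one level of hash features.
img_loss = (torch.randn(1024, requires_grad=True) ** 2).mean()
hash_features = torch.randn(2 ** 14, 2, requires_grad=True)

tv_loss_weight = 0.0  # set to 0.0 (or comment the block out) to benchmark

loss = img_loss
if tv_loss_weight > 0:
    # The TV term back-propagates through the entire embedding table, so it
    # can dominate step time; disabling it isolates the encoder/MLP cost.
    loss = loss + tv_loss_weight * total_variation_loss(hash_features)

loss.backward()
```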
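
And to illustrate point 2, a minimal sketch of the multi-resolution hash encoding idea from the Instant-NGP paper: most of the model capacity lives in small trainable feature tables rather than in a large MLP, so the MLP can be tiny. All hyperparameter values here are illustrative, not this repo's defaults, and the nearest-vertex lookup is a simplification (the real encoder trilinearly interpolates the 8 surrounding grid vertices):

```python
import torch
import torch.nn as nn

class HashEncoder(nn.Module):
    # Minimal sketch of Instant-NGP-style multi-resolution hash encoding.
    def __init__(self, n_levels=16, n_features=2, log2_table_size=19,
                 base_res=16, max_res=512):
        super().__init__()
        self.table_size = 2 ** log2_table_size
        # Per-level geometric growth factor b, as in the paper.
        b = (max_res / base_res) ** (1.0 / (n_levels - 1))
        self.resolutions = [int(base_res * b ** i) for i in range(n_levels)]
        # One small trainable feature table per level: this is where most
        # of the model capacity lives, instead of in a large MLP.
        self.tables = nn.ModuleList(
            nn.Embedding(self.table_size, n_features) for _ in range(n_levels)
        )
        self.register_buffer(
            "primes", torch.tensor([1, 2654435761, 805459861]))

    def spatial_hash(self, coords: torch.Tensor) -> torch.Tensor:
        # Spatial hash: XOR of coordinate * prime per axis, mod table size.
        x = coords.long() * self.primes
        return (x[..., 0] ^ x[..., 1] ^ x[..., 2]) % self.table_size

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz in [0, 1]^3, shape (N, 3). For brevity we look up only the
        # nearest grid vertex per level instead of interpolating 8 corners.
        feats = []
        for level, res in enumerate(self.resolutions):
            idx = self.spatial_hash((xyz * res).floor())
            feats.append(self.tables[level](idx))
        return torch.cat(feats, dim=-1)  # (N, n_levels * n_features)

encoder = HashEncoder()
# A "small MLP" head, versus vanilla NeRF's 8-layer, 256-wide network.
small_mlp = nn.Sequential(nn.Linear(16 * 2, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 4))
out = small_mlp(encoder(torch.rand(1024, 3)))  # (1024, 4): density + RGB
```

Because each sample only touches a handful of table rows per level, gradient updates are sparse and the tiny MLP is cheap to optimize, which is why HashNeRF converges in minutes even though a single iteration is not necessarily faster than nerf-pytorch.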

If you have any more questions, feel free to reopen the issue.