Questions about how to reproduce the NeRF results on the Stanford dataset
Closed this issue · 6 comments
Hi, thanks for your great work. I have some questions about how to reproduce the baseline NeRF results on the Stanford dataset.
I followed your code in baselines/nerf_tensorflow and only modified the config file "stanford_config.txt" with factor=1, but the results do not look very good.
First, it only reaches 19.2 PSNR after training for 200k iterations; training for more iterations improves it very little, and it remains much lower than the result in your paper.
Also, how many iterations should I run? The total in your code is 1,000,000, and it runs very slowly.
I'm wondering how I can reproduce the baseline result; I would be very grateful for any help.
Hi! A few questions:
- Are you using our stanford_half data provided here? Which scene are you testing on?
- Did you make any changes to the nerf_tensorflow codebase?
- Did you change any other configuration options apart from factor=1 ?
- I'm using the stanford_half data you provided. I have tested on the chess, tarot_small, and tarot_large scenes; the result on the chess scene matches your paper, but the PSNR on tarot_small and tarot_large is much lower than reported in the paper. I only put the images in a new folder named images so that nerf_tensorflow could load them.
- When I changed the configuration to factor=1, I had to modify some code to make it run; I added the two lines below:
width = int(np.round(sh[1] / factor))
height = int(np.round(sh[0] / factor))
- No other configuration was changed.
- Besides, nerf_tensorflow converges very fast on the chess scene, reaching 41 PSNR after 200k iterations, while it converges extremely slowly on the tarot_small/tarot_large scenes, reaching only 19.5 PSNR after 200k iterations.
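For reference, the intent of the two added lines above can be sketched as follows. This is a minimal illustration of the factor-based resolution computation; the function name and image shape here are hypothetical, not the actual load_llff.py code.

```python
import numpy as np

def target_resolution(sh, factor):
    """Compute the downsampled (height, width) from an image shape.

    sh is the original image shape (height, width, channels);
    with factor=1 the images are kept at full resolution.
    """
    height = int(np.round(sh[0] / factor))
    width = int(np.round(sh[1] / factor))
    return height, width

# With factor=1 the resolution is unchanged:
print(target_resolution(np.array([3024, 4032, 3]), 1))  # (3024, 4032)
```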
That's quite odd. Both the tarot_small and tarot_large models exceed 20dB training PSNR fairly quickly for me, and after training for longer I am able to reproduce the paper results. Here are examples of predicted validation images after about 30k iters for both tarot and tarot_small.
What command are you using to run NeRF? Are you changing the checkpointing / log directory each time you run an experiment? If not, you may be accidentally loading the weights and optimizer state from a previous scene (which could negatively impact performance).
For example, I am running:
python run_nerf.py --config configs/stanford_config.txt --datadir ~/data/stanford_half/tarot --expname nerf_tarot
Changing the --expname argument will change the checkpointing / log directory.
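A sketch of a workflow that avoids this pitfall: give every scene its own --expname so that checkpoints and optimizer state from a previous run are never reloaded by accident. The paths and scene names below are illustrative; the echo only prints the commands, so remove it to actually launch training.

```shell
# One experiment directory per scene, so runs never share state.
for scene in chess tarot_small tarot_large; do
  echo python run_nerf.py --config configs/stanford_config.txt \
    --datadir ~/data/stanford_half/"$scene" --expname nerf_"$scene"
done
```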
Did you manage to resolve the issue?
Yes. When I deleted the old experiment directory and trained from scratch, I successfully reproduced results close to yours.
Maybe I forgot to change the exp_name at first, so it loaded the wrong weights, and that bad initialization hurt training a lot.
Okay great! I'll add some more explicit instructions about how to run this baseline in the README.