limacv/Deblur-NeRF

How to set the N_rand value to get the same output as shown in the repository?

Closed this issue · 3 comments

Hi @limacv,

This is amazing work.

I am training the algorithm on an NVIDIA RTX 3060 GPU (12 GB VRAM).

After 1200 iterations, the code throws this error:
[screenshot of the error attached: Screenshot 2022-07-01 12:10:09]

I was wondering how I should set the N_rand value to reproduce the output.

Thanks
Adwait

I have one more query: is it possible to use image metrics like the Feature Similarity Index Measure (FSIM) along with SSIM for this code?

Hi, I'm using 1 V100 (32 GB) with N_rand = 1024. The memory required is roughly proportional to N_rand, so I guess you can first try N_rand = 1024 * (12 / 32) = 384. For other image metrics, just implement your own metrics in the test stage.

Thanks for the response, @limacv.

I was able to replicate the results of your algorithm on my GPU, albeit with smaller N_rand, chunk, and netchunk values.

This is the command I used.

python3 run_nerf.py --config configs/demo_blurball.txt --N_rand 128 --chunk 384 --netchunk 768
1. Results after 20,000 iterations:
   blurball_full_spiral_020000_rgb.mp4
2. Results after 40,000 iterations:
   blurball_full_spiral_040000_rgb.mp4
3. Results after 60,000 iterations (final):
   blurball_full_spiral_060000_rgb.mp4

Thanks for your help.