training time and downsampled images?
hvkwak opened this issue · 1 comment
Hi, I was wondering whether the training times reported in the paper are based on downsampled images, e.g. the Mill 19 Building dataset with its 4K images downsampled to 1K, so that training then takes the reported 29:49 (h) per cluster. With full 4Kx3K images (no downsampling), a single image contains roughly 12M rays. If we further assume that one submodule (a single NeRF) is trained on 40 images, that comes to about 500M rays, so a single epoch over all 500M rays would already use up the "default" total of ~512M sampled rays (500K iterations x a batch size of 1024). Would one epoch be enough in that case? On my custom dataset, it looks like training one submodule on 4K images would take about 30 days.
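For reference, a quick back-of-envelope check of the numbers above; the image size, image count, and iteration budget are the values assumed in the question, not read from the codebase:

```python
# Sketch of the ray-budget arithmetic from the question (not code from this repo).
# Assumptions: 4000 x 3000 images (~12M rays each), 40 images per submodule,
# and a default schedule of 500K iterations with a ray batch size of 1024.

rays_per_image = 4000 * 3000               # 12M rays per full-resolution image
rays_per_submodule = 40 * rays_per_image   # 480M rays in one pass over 40 images

sampled_rays = 500_000 * 1024              # ~512M rays drawn under the default schedule

print(f"rays in one epoch over 40 images: {rays_per_submodule / 1e6:.0f}M")
print(f"rays sampled in 500K iterations:  {sampled_rays / 1e6:.0f}M")
print(f"epochs covered by the schedule:   {sampled_rays / rays_per_submodule:.2f}")
```

Under these assumptions the default schedule samples only slightly more rays than a single epoch over the 40 full-resolution images, which is what motivates the question.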
In the paper we trained on the full-resolution images and evaluated on images downsampled by a factor of 4 (the codebase should use those settings by default).
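As context for why evaluation at 1/4 resolution is much cheaper than full-resolution training, a factor-4 downsample reduces the per-image ray count by 16x; a minimal sketch using the same assumed 4000 x 3000 resolution from the question:

```python
# Sketch of the effect of a factor-4 evaluation downsample (assumed 4000 x 3000 input).
full_w, full_h = 4000, 3000
eval_w, eval_h = full_w // 4, full_h // 4     # 1000 x 750 after downsampling by 4

print(f"train rays per image: {full_w * full_h / 1e6:.1f}M")   # 12.0M
print(f"eval rays per image:  {eval_w * eval_h / 1e6:.2f}M")   # 0.75M, i.e. 16x fewer
```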