Configurations used in your paper
MingyuKim87 opened this issue · 4 comments
Thanks for sharing your work in public.
Regarding the configuration attached in this GitHub repository, I suspect this version may differ slightly from the one you used to produce the reported numerical results.
When I computed the Chamfer distance for dtu_scan106, I obtained 0.57 with the official configuration.
(In the paper, you report a Chamfer distance of 0.51 for dtu_scan106.)
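For reference, the numbers above come from a Chamfer-distance evaluation between the reconstructed and ground-truth point clouds. The DTU benchmark has its own official evaluation scripts, so the following is only a minimal sketch of the symmetric Chamfer distance (mean nearest-neighbor distance in both directions), not the paper's exact protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p1: np.ndarray, p2: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p1 (N,3) and p2 (M,3).

    Averages the nearest-neighbor distances in both directions; the official
    DTU evaluation additionally applies visibility masks and distance
    thresholds, which are omitted here.
    """
    d1, _ = cKDTree(p2).query(p1)  # distance from each p1 point to its nearest p2 point
    d2, _ = cKDTree(p1).query(p2)  # and vice versa
    return 0.5 * (d1.mean() + d2.mean())
```

A discrepancy like 0.57 vs. 0.51 could come from the evaluation details (masks, thresholds, point sampling) as well as from the training configuration itself.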
Also, training took at least 24 hours. My compute resources are an RTX A6000, a 24-core EPYC CPU, and 256 GB of memory.
(In this setup, I followed the configuration in the official GitHub repository.)
This configuration uses 100,000 training steps by default.
However, according to your paper, training on the DTU dataset took 9 hours on a V100.
I think the paper's description of the configuration used to achieve the reported numerical results is unclear.
For this reason, could you share all configurations you used in this paper?
At least, I'd like to have the configuration for all scenes in the DTU dataset.
Best regards,
-Mingyu Kim-
Have you successfully run the code? If yes, how much video memory is occupied when the batch size is 1024?
Looking forward to your reply~
I ran this code, but with the batch size set to 4096 (I'm not certain, since it was about 6 months ago).
Thank you so much for your quick reply~