The training results on the sample dataset did not meet expectations
Opened this issue · 13 comments
I have downloaded the sample data you provided and followed your training command to complete 300,000 training iterations. However, both the geometry and appearance results are not very good and do not meet my expectations.
In the camera view of your SIBR_viewers:
The PLY in SuperSplat looks even worse:
Additionally, I noticed that the training parameters like scaling learning rate and position learning rate mentioned in your paper are inconsistent with the training command in the repository. Could you please help me identify which step I might have done incorrectly?
By the way, how are your depth maps generated? In theory, lidar data is sparser than image data, so how do you achieve such smooth depth maps?
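For reference, one common way to turn sparse projected lidar depth into a dense map is simple hole filling by neighbor propagation. This is a minimal sketch of that idea, not the authors' actual pipeline; the function name and the propagation scheme are my own assumptions:

```python
import numpy as np

def densify_depth(sparse: np.ndarray, max_iters: int = 200) -> np.ndarray:
    """Fill zero-valued (missing) pixels with the mean of their valid
    4-neighbors, repeated until the map is dense. A crude stand-in for
    whatever depth completion/smoothing the authors actually use."""
    d = sparse.astype(np.float64).copy()
    for _ in range(max_iters):
        valid = d > 0
        if valid.all():
            break
        pd = np.pad(d, 1)      # invalid pixels hold 0, so they add
        pv = np.pad(valid, 1)  # nothing to the neighbor sums below
        neigh_sum = (pd[:-2, 1:-1] + pd[2:, 1:-1]
                     + pd[1:-1, :-2] + pd[1:-1, 2:])
        neigh_cnt = (pv[:-2, 1:-1].astype(int) + pv[2:, 1:-1]
                     + pv[1:-1, :-2] + pv[1:-1, 2:])
        # only fill missing pixels that have at least one valid neighbor
        fill = (~valid) & (neigh_cnt > 0)
        d[fill] = neigh_sum[fill] / neigh_cnt[fill]
    return d
```

Each pass grows the valid region by one pixel ring, so even very sparse lidar projections become smooth, dense maps after a few iterations; measured pixels are never overwritten.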
Try this command:

```shell
# --depths enables the depth loss when the dataset contains a depths/ folder
python train.py -s \
    --use_lod \
    --sh_degree 2 \
    --depths depths \
    --densification_interval 10000 \
    --iterations 300000 \
    --scaling_lr 0.0015 \
    --position_lr_init 0.000016 \
    --opacity_reset_interval 300000 \
    --densify_until_iter 200000 \
    --data_device cpu \
    -r 1
```
@ShenjwSIPSD @zhaofuq
Hi, I also tried the sample data and got poor results too.
Specifically, the mixed level looks worse than level 0, level 1, and some others.
In my results, level 1 performs best, though it still does not match the result in the demo.
Level 1 shouldn't perform best: it is the second-sparsest level. Can you post the images?
There might be a bug in the main branch. Try the depth branch with this command:

```shell
python train.py -s \
    --sh_degree 2 \
    --depths depths \
    --densification_interval 100 \
    --iterations 90000 \
    --scaling_lr 0.0015 \
    --position_lr_init 0.000016 \
    --opacity_reset_interval 300000 \
    --densify_until_iter 75000 \
    --data_device cpu \
    -r 1
```
Besides, when using LOD_viewer, try adding `--dmax 200` after `-m`.
There are small differences between Linux and Windows; Windows works better, but the result still did not meet the expectations set by the paper.
Try the depth branch.
Hi, `submodules/diff-gaussian-rasterization/third_party/glm` may be missing in the depth branch. Just a reminder for others: it can be copied from the main branch.
Hi, thanks for your guidance. I noticed that when `--densification_interval` is set to 100, training slows down rapidly and the estimated training time jumps to hundreds of hours. Is this normal, and how can I further increase the training speed? Thanks a lot.
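Some slowdown is expected: every densification event adds Gaussians, and per-iteration cost scales with the Gaussian count, so a smaller `--densification_interval` means many more densification events before `--densify_until_iter`. A quick sanity check of the event counts (plain arithmetic mirroring the flags; the exact schedule inside the repo is an assumption):

```python
def densify_steps(iterations: int, interval: int, until_iter: int) -> int:
    """Count densification events: one every `interval` iterations,
    stopping at `until_iter` (or at the end of training, if earlier)."""
    return len(range(interval, min(iterations, until_iter) + 1, interval))

print(densify_steps(90000, 100, 75000))      # suggested command -> 750
print(densify_steps(300000, 10000, 200000))  # original command  -> 20
```

So the suggested command densifies roughly 40x more often than the original one, which matches the large jump in estimated training time.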
Thank you for your feedback. We will run some more tests and find out the reason.
Hi, great work. I also tried the sample data on the depth branch and still got a poor result. I used LOD_viewer with `--dmax 200`. Is there anything I did wrong? My command and result are listed below.
```shell
python train.py -s ../data \
    --sh_degree 2 \
    --depths depths \
    --densification_interval 400 \
    --iterations 90000 \
    --scaling_lr 0.0015 \
    --position_lr_init 0.000016 \
    --opacity_reset_interval 300000 \
    --densify_until_iter 75000 \
    --data_device cpu \
    -r 1
```