Fictionarry/DNGaussian

error during training

Opened this issue · 5 comments

Traceback (most recent call last):
  File "train_llff.py", line 400, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, args.near)
  File "train_llff.py", line 95, in training
    render_pkg = render_for_depth(viewpoint_cam, gaussians, pipe, background)
  File "/media/public/disk5/gaohc/huanghy/DNGaussian-main/gaussian_renderer/__init__.py", line 178, in render_for_depth
    rendered_image, radii, rendered_depth, rendered_alpha = rasterizer(
ValueError: not enough values to unpack (expected 4, got 2)

Why did this happen? I followed your settings exactly...

Hi, I guess you are using the official 3DGS rasterizer to run the code. Our repo requires a modified version from https://github.com/ashawkey/diff-gaussian-rasterization, which is included in ./submodules.
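The error arises because the official rasterizer returns a 2-tuple `(image, radii)` while the modified fork returns a 4-tuple that also carries depth and alpha. A minimal sketch of a defensive unpack illustrating the mismatch (the helper name `unpack_raster_output` is hypothetical, not part of the repo):

```python
def unpack_raster_output(out):
    """Accept either the 2-tuple (image, radii) of the official 3DGS
    rasterizer or the 4-tuple (image, radii, depth, alpha) of the
    modified fork; pad missing values with None."""
    if len(out) == 4:
        image, radii, depth, alpha = out
    elif len(out) == 2:
        image, radii = out
        depth, alpha = None, None
    else:
        raise ValueError(f"unexpected rasterizer output of length {len(out)}")
    return image, radii, depth, alpha
```

In the repo itself the fix is simply to install the bundled submodule rather than to pad the output, since `render_for_depth` genuinely needs the depth and alpha channels.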

Thank you, I have solved that problem, but I met another one during rendering.
I got a test PSNR of 23.00 and a train PSNR of 28 by the end of training, but when computing metrics and rendering images I got a PSNR of 12. Why did this happen?

This is abnormal; I haven't encountered such a problem in my tests. The latest "eval" results reported during training should be close to the final results. Please provide more information about which script, scene, and dataset you used, so that I can try to reproduce the problem.

And please check whether the images have been correctly rendered.

I have solved this problem by removing this strategy in train_llff.py

if (iteration - 1) % 25 == 0:

        #     viewpoint_sprical_cam = viewpoint_sprical_stack.pop(0)
        #     mask_near = None
        #     if iteration > 2000:
        #         for idx, view in enumerate(scene_sprical.getRenderCameras().copy()):
        #             mask_temp = (gaussians.get_xyz - view.camera_center.repeat(gaussians.get_xyz.shape[0], 1)).norm(dim=1, keepdim=True) < near_range
        #             mask_near = mask_near + mask_temp if mask_near is not None else mask_temp
        #         gaussians.prune_points(mask_near.squeeze())

and removing

if near > 0:

#     mask_near = None
#     for idx, view in enumerate(tqdm(views, desc="Rendering progress", ascii=True, dynamic_ncols=True)):
#         mask_temp = (gaussians.get_xyz - view.camera_center.repeat(gaussians.get_xyz.shape[0], 1)).norm(dim=1, keepdim=True) < near
#         mask_near = mask_near + mask_temp if mask_near is not None else mask_temp
#     gaussians.prune_points_inference(mask_near)

in render.py. I am still confused by this result, though.

It seems the hyperparameters you used are not exactly the same as in our scripts. Make sure --near at rendering time matches the value used in training; otherwise some near content may be excluded during rendering. It limits the scene range, much like the near and far planes in NeRFs.