luciddreamer-cvlab/LucidDreamer

Dark Rendered Videos and Noisy Gaussian Splatting

Closed this issue · 5 comments

Hello,

I'm experiencing an issue where the videos rendered by the project turn out predominantly dark and show noisy Gaussian splatting artifacts. The problem persists across various examples, regardless of whether I include a negative prompt. Below are the details of my setup and the steps I've followed:

Commit Hash: 76ed990
Environment:
Framework: PyTorch 2.0.0
CUDA Version: 12.1
Python Version: 3.10
Execution Environment: I've tried both an NGC Docker container and a Conda environment with the same results.

python run.py --image examples/Image015_animelakehouse.jpg --text examples/Image015_animelakehouse.txt --neg_text examples/Image015_animelakehouse_negative.txt

Issue Description: The output videos are significantly dark, making it difficult to discern details. Additionally, the Gaussian splatting representation in the gsplat file is noticeably noisy, suggesting that the issue might not be related to rendering alone.
Attempts to Resolve:
I haven't modified the code from the specified commit.
I've tested multiple examples with and without the negative text prompt, all yielding similar results.
Supporting Material: I have uploaded examples of the output and the gsplat files here for reference: https://drive.google.com/drive/folders/17ayNVMTilR_e16F3qHv8PVwgTSx_lhxL?usp=drive_link

I'm unsure if this is an environmental issue, a problem with the commit I'm using, or something else entirely. Any insights or suggestions for troubleshooting would be greatly appreciated.

Thank you.

After debugging, I'm confident it's not an environment issue. I'm including images that show the recorded 3D point cloud, the Gaussian splat cloud, and the rendered frames used for loss calculation.

The generate_pcd function seems to work fine:
[image: 3D point cloud produced by generate_pcd]
As expected, after around 13 steps with the camera matrices, the scene starts to extrapolate.

The alignment process produces about 70 frame images along with their camera data. Some frames contain unusual noise, but overall it looks okay.
Example:
[images: aligned frames j_idx19 and j_idx20]

The issue appears to be with the Gaussian splatting training. Looking at the loss inputs, the rendered image quality is very poor.
Here are some examples of rendered images from the first ~20 iterations (a sketch of how such per-iteration renders can be dumped for inspection follows the images):
[images: renders at iter1, iter17, iter24]
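
For reference, here is a minimal sketch of how per-iteration renders like these can be dumped for inspection. It assumes the training loop exposes the rasterized image as a `(3, H, W)` tensor the way the reference 3DGS `train.py` does via `render(...)["render"]`; the helper name and output path below are made up for illustration:

```python
import os

import torch
from torchvision.utils import save_image


def dump_render(image: torch.Tensor, iteration: int, out_dir: str = "debug_renders") -> None:
    """Save one training iteration's rasterizer output as a PNG for inspection.

    `image` is assumed to be the (3, H, W) tensor returned by the renderer,
    e.g. render(viewpoint_cam, gaussians, pipe, background)["render"] in the
    reference 3DGS training loop (names here are assumptions).
    """
    os.makedirs(out_dir, exist_ok=True)
    save_image(image.detach().clamp(0.0, 1.0), os.path.join(out_dir, f"iter{iteration}.png"))
```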

There's definitely something wrong with the rendering process. Additionally, here's what the Gaussian training process looks like:
[images: Gaussian training at init_step0, step1000, step2000]

If anyone has encountered a similar issue, I'd appreciate hearing about your experience. If you have any insights or suggestions, I'd be grateful for your input.

UPDATE: It appears there is an issue with the rasterizer method in the `depth_diff_gaussian_rasterization_min` package. The issue is resolved when you set the `convert_SHs_python` parameter to `True` in `arguments.py`.
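
For anyone unsure what exactly to change: a minimal sketch of the edit, assuming `arguments.py` follows the `PipelineParams` layout of the reference 3D Gaussian Splatting code (the surrounding class and names may differ in this repo):

```python
# arguments.py -- sketch only; assumes the reference 3DGS PipelineParams layout,
# so the surrounding code in this checkout may differ.
class PipelineParams(ParamGroup):
    def __init__(self, parser):
        # Evaluate spherical harmonics to RGB in Python instead of inside the
        # CUDA rasterizer; this works around the dark/noisy renders above.
        self.convert_SHs_python = True   # changed from False
        self.compute_cov3D_python = False
        self.debug = False
        super().__init__(parser, "Pipeline Parameters")
```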


I tried changing `convert_SHs_python` in `arguments.py` to `True`, but it still doesn't seem to work!

Sorry, I'm new to programming. Can you tell me exactly what to change?

@onlyjokers Well, this should have fixed the problem. You might be running into a different issue.

Hello, have you solved this?