Issue with result on torchocolate dataset
ShawnXu10 opened this issue · 9 comments
Hi,
I recently tried to train on the torchocolate dataset using the provided code, but ran into an unexpected problem. The results I obtained were noticeably blurred, deviating from the results you provided. I'm reaching out to ask for your insights or suggestions on what might be causing this issue.
Thank you for your time and assistance.
Best,
This does look a bit strange, because the upper part is normal but the lower part is blurry. Are you using the latest code? Could you provide your command? I have never encountered this situation in my experiments with this scene.
I'm using the latest commit '152ff1f'. I ran the command 'python train.py -s data/Hyper-NeRF/torchocolate -m output/torchocolate --eval' and then rendered the images with 'python render.py -m output/torchocolate'.
Also, the result for D-NeRF/bouncingballs has no such issue.
Also, the interpolate_view result becomes totally messed up.
video.mp4
The interpolate_hyper_view result seems to make sense, but it is still noisy.
video_interpolate_hyper_view.mp4
Also, the interpolate_view result becomes totally messed up.
I think this is normal. I remember deleting this part of the code, and I apologize if it wasn't removed. The reason is that I didn't know the scale of the scene, which caused it to break apart when zooming in and out.
The interpolate_hyper_view result seems to make sense, but it is still noisy.
Since your rendering results are noisy, it's inevitable that there will be noise in interp_view as well. Your commands and code seem correct. I have set a seed, so in theory the results should be very stable.
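For context, here is a minimal sketch of the kind of global seeding I mean. This is the usual PyTorch pattern, given as an illustration only; the helper name and exact calls are not necessarily the code in this repository.

```python
# Illustrative only: a typical way to seed all RNGs a training loop touches.
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Seed Python, NumPy, and PyTorch RNGs for reproducible training runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade a bit of speed for run-to-run stability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```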
I suggest you run it again, and if a similar problem still occurs, I will check my code more carefully.
As shown in this issue, some people have already been able to successfully achieve stable results in this scene.
My entire experimental setup actually does not use 6_dof. In my experiments, 6_dof only yields better results on the Blender dataset, and it decreases both rendering quality and speed on real datasets.
Regarding the quality issue with torchocolate, I think it's more of an environmental issue.
Another small possibility is not using torchocolate's point cloud for initialization. However, from my results, the difference between random initialization and COLMAP point cloud initialization on torchocolate is not significant.
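In case it helps you check which one you are running, here is a rough sketch of the two initialization strategies I am comparing. The file path, point count, and function names are placeholders for illustration, not the repository's actual API.

```python
# Illustrative sketch of the two initialization options discussed above.
import numpy as np
from plyfile import PlyData


def random_init(num_points: int = 100_000, extent: float = 1.3) -> np.ndarray:
    """Uniform random points inside a cube around the scene origin."""
    return (np.random.rand(num_points, 3) * 2.0 - 1.0) * extent


def colmap_init(ply_path: str) -> np.ndarray:
    """Load xyz positions from a COLMAP sparse point cloud exported as PLY."""
    vertices = PlyData.read(ply_path)["vertex"]
    return np.stack([vertices["x"], vertices["y"], vertices["z"]], axis=-1)
```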
The imprecise camera poses in the HyperNeRF dataset are also a potential source of instability. You could try the NeRF-DS dataset, which has much more accurate poses. If the problem still occurs on that dataset, I will check my code.
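For example, mirroring your commands above, you could run something like 'python train.py -s data/NeRF-DS/as -m output/as --eval' followed by 'python render.py -m output/as'; here the data path and the scene name 'as' are just placeholders for wherever you put a NeRF-DS scene.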