splatfacto blurry results using Blender-generated synthetic data
Hi everyone,
- blender screenshot
I'm working on a synthetic Blender dataset for Nerfstudio's splatfacto/gsplat (figure above). I rendered 1000 images with a virtual camera, using the same intrinsics for every image, and converted them into transforms.json (1000 cameras) and sparse_pc.ply (500,021 points sampled from the 3D model).
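For context, this is roughly how I export the cameras from Blender into transforms.json. It's a simplified sketch with placeholder file names; it assumes a single animated scene camera, square pixels, horizontal sensor fit, and 100% render resolution (the actual export also writes sparse_pc.ply sampled from the model):

```python
# Simplified sketch of the Blender -> transforms.json export.
# Assumptions: one active scene camera, square pixels, horizontal sensor fit,
# resolution_percentage = 100, placeholder output paths.
import json
import bpy

scene = bpy.context.scene
cam = scene.camera
w = scene.render.resolution_x
h = scene.render.resolution_y

# Focal length in pixels from Blender's millimetre focal length and sensor width.
fl_x = cam.data.lens / cam.data.sensor_width * w
fl_y = fl_x  # square pixels assumed

frames = []
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    # Blender cameras look down -Z with +Y up, which matches the OpenGL-style
    # camera-to-world convention Nerfstudio expects, so matrix_world is used as-is.
    frames.append({
        "file_path": f"images/frame_{frame:04d}.png",  # placeholder naming scheme
        "transform_matrix": [list(row) for row in cam.matrix_world],
    })

with open("transforms.json", "w") as f:
    json.dump(
        {
            "fl_x": fl_x, "fl_y": fl_y,
            "cx": w / 2.0, "cy": h / 2.0,
            "w": w, "h": h,
            "frames": frames,
        },
        f,
        indent=2,
    )
```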
I've verified that the COLMAP GUI loads the dataset correctly, as shown in the figure above. Nerfstudio also appears to load it correctly: in the figure below you can see that the sparse points from my dataset load as expected, and as training progresses the Gaussians form close to those points.
However, I'm concerned about the training results. Since the data is synthetic, I expected the final renders to be nearly indistinguishable from the original 3D model, but they come out blurry even though the SSIM is high (above 0.9). Could anyone help me understand what might be causing this?
- left (GT), right (synthesized view)
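In case it helps, this is a minimal sketch of how I compare a ground-truth render against the corresponding splatfacto output at full resolution, checking PSNR alongside SSIM since a smooth, blurry render can still score a high SSIM (file paths are placeholders, and it assumes uint8 PNGs):

```python
# Hypothetical per-image metric check; paths and file names are placeholders.
import imageio.v3 as iio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = iio.imread("renders/gt/frame_0001.png")[..., :3]      # drop alpha if present
pred = iio.imread("renders/pred/frame_0001.png")[..., :3]

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

If I'm not mistaken, Nerfstudio's `ns-eval` reports similar metrics over the whole evaluation split, which is what I've been looking at so far.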
Thank you in advance for your assistance!