
GaussianRoom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction

Haodong Xiang*, Xinghui Li*, Kai Cheng*, Xiansong Lai, Wanting Zhang, Zhichao Liao, Long Zeng✉, Xueping Liu

📃 Abstract

Recently, 3D Gaussian Splatting (3DGS) has revolutionized neural rendering with its high-quality rendering and real-time speed. However, when it comes to indoor scenes with a significant number of textureless areas, 3DGS yields incomplete and noisy reconstruction results due to the poor initialization of the point cloud and under-constrained optimization. Inspired by the continuity of the signed distance field (SDF), which naturally has advantages in modeling surfaces, we present a unified optimization framework integrating a neural SDF with 3DGS. This framework incorporates a learnable neural SDF field to guide the densification and pruning of Gaussians, enabling Gaussians to model scenes accurately even with poorly initialized point clouds. At the same time, the geometry represented by the Gaussians improves the efficiency of the SDF field by guiding its point sampling. Additionally, we regularize the optimization with normal and edge priors to eliminate geometric ambiguity in textureless areas and improve the details. Extensive experiments on ScanNet and ScanNet++ show that our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.

🧭 Overview

GaussianRoom integrates a neural SDF within 3DGS, forming a positive cycle in which each improves the other. (a) We employ geometric information from the SDF to constrain the Gaussian primitives, ensuring their spatial distribution aligns with the scene surface. (b) We utilize depth rasterized from the Gaussians to efficiently provide coarse geometric information, narrowing the sampling range and accelerating the optimization of the neural SDF. (c) We introduce monocular normal and edge priors to address the challenges of textureless areas and fine structures indoors.
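
The loop below is a highly simplified, illustrative sketch of this alternating scheme, not the repository's training code: every piece is a toy stand-in (the SDF is a unit sphere, the "Gaussians" are plain 3D points), and the thresholds are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)

def sdf(points):
    # toy stand-in for the neural SDF: signed distance to a unit sphere
    return np.linalg.norm(points, axis=-1) - 1.0

def prune_with_sdf(centers, tau=0.05):
    # (a) keep Gaussians whose centers lie near the SDF zero level set;
    #     in the real method the SDF also guides where to densify
    return centers[np.abs(sdf(centers)) < tau]

def sample_near_depth(depth, width=0.1, n=8):
    # (b) concentrate SDF ray samples in a band around the depth rasterized
    #     from the Gaussians, instead of sampling along the whole ray
    return depth + width * (rng.random(n) - 0.5)

centers = rng.normal(size=(5000, 3))           # toy Gaussian centers
for step in range(3):
    centers = prune_with_sdf(centers)
    samples = sample_near_depth(depth=1.0)     # one toy ray whose rasterized depth is 1.0
    # (c) in the real pipeline, rendered normals and edges are additionally
    #     supervised with the monocular priors from pred_normal/ and pred_edge/
    print(f"step {step}: {len(centers)} Gaussians kept, SDF samples at {np.round(samples, 2)}")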

🌏 News

  • [ ✔️ ] 2024-12-09 Initial code release (version 0)
  • [ ✔️ ] 2024-12-09 Dataset sample released

📌 Setup

Clone this repo

git clone https://github.com/xhd0612/GaussianRoom.git

Environment setup

It is recommended to install some modules manually, especially the submodules required by 3DGS.

conda create -n GaussianRoom python=3.9
conda activate GaussianRoom 
pip install -r requirements.txt
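
As a quick sanity check after installation, you can verify that the 3DGS rasterization extensions import correctly. The module names below are taken from the original 3DGS codebase, not from this repo, so treat them as assumptions and adjust if the submodules here are named differently.

import importlib.util

# Check that the 3DGS CUDA extensions are importable. The names follow the
# original 3DGS repository (diff_gaussian_rasterization, simple_knn).
for module in ("diff_gaussian_rasterization", "simple_knn"):
    status = "found" if importlib.util.find_spec(module) else "missing (pip install it from its submodule directory)"
    print(f"{module}: {status}")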

📁 Data Preparation

Data structure

data/
├── intrinsic_depth.txt
├── scene_name-GS/
│   ├── images/
│   └── sparse/
├── scene_name-SDF/
│   ├── image/
│   ├── pred_edge/
│   ├── pred_normal/
│   ├── cameras_sphere.npz
│   ├── intrinsic_depth.txt
│   ├── scene1_vh_clean_2.ply
│   └── trans_n2w.txt
...
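
A small helper like the one below can verify that a scene folder matches this layout before training. It only checks the names listed in the tree above; data and scene_name are placeholders, and the ground-truth mesh (e.g. scene1_vh_clean_2.ply) is scene-specific, so it is not checked here.

from pathlib import Path

def check_scene_layout(data_root, scene_name):
    """Report which of the expected files/folders from the tree above exist."""
    root = Path(data_root)
    sdf = root / f"{scene_name}-SDF"
    expected = [
        root / "intrinsic_depth.txt",
        root / f"{scene_name}-GS" / "images",
        root / f"{scene_name}-GS" / "sparse",
        sdf / "image",
        sdf / "pred_edge",
        sdf / "pred_normal",
        sdf / "cameras_sphere.npz",
        sdf / "intrinsic_depth.txt",
        sdf / "trans_n2w.txt",
    ]
    for path in expected:
        print(f"{'ok     ' if path.exists() else 'MISSING'} {path}")

check_scene_layout("data", "scene_name")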

Data Preprocessing

The raw indoor scene dataset we used is sourced from ScanNet and ScanNet++. We also provide a processed dataset as a sample here.

Follow the data processing in NeuRIS for the SDF branch, and follow the data processing in 3DGS for the GS branch.

Edge & normal priors

Please refer to pidinet for obtaining edge maps and surface_normal_uncertainty for obtaining normal maps. Alternatively, you may choose other similar methods to obtain these two monocular priors required for GaussianRoom.

We save the edge maps in .png format to data/scene_name-SDF/pred_edge, and the normal maps in .npz format to data/scene_name-SDF/pred_normal.
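
If you generate these priors yourself, the snippet below shows one plausible way to write them in those locations and formats. The file naming and the "normal" key inside the .npz are assumptions; check the provided dataset sample or the data-loading code for the exact convention GaussianRoom expects.

from pathlib import Path
import numpy as np
from PIL import Image

def save_priors(edge, normal, frame_id, sdf_dir):
    """Write one edge map (H, W, values in [0, 1]) and one normal map (H, W, 3).

    Edges go to pred_edge/<frame_id>.png and normals to pred_normal/<frame_id>.npz,
    as described above; the 'normal' array key is an assumed convention.
    """
    sdf_dir = Path(sdf_dir)
    (sdf_dir / "pred_edge").mkdir(parents=True, exist_ok=True)
    (sdf_dir / "pred_normal").mkdir(parents=True, exist_ok=True)
    edge_u8 = (np.clip(edge, 0.0, 1.0) * 255).astype(np.uint8)
    Image.fromarray(edge_u8).save(sdf_dir / "pred_edge" / f"{frame_id}.png")
    np.savez(sdf_dir / "pred_normal" / f"{frame_id}.npz", normal=normal.astype(np.float32))

# example with dummy data
save_priors(np.random.rand(480, 640), np.random.randn(480, 640, 3), "000000", "data/scene_name-SDF")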

🎮 Run the code

Running the code can require a large amount of GPU memory, so it is recommended to use a GPU with more than 32 GB of VRAM.

Before running run.sh, edit the file paths in it to match your local setup.

cd GaussianRoom
bash run.sh

📜 Acknowledgement

This work builds on excellent open-source projects such as 3DGS, NeuS, NeuRIS, and GaussianPro; open-sourcing it is a small contribution back to the open-source community.

🖊 Citation

If our work is helpful to your research, please consider citing:

@article{xiang2024gaussianroom,
  title={GaussianRoom: Improving 3D Gaussian Splatting with SDF Guidance and Monocular Cues for Indoor Scene Reconstruction},
  author={Xiang, Haodong and Li, Xinghui and Cheng, Kai and Lai, Xiansong and Zhang, Wanting and Liao, Zhichao and Zeng, Long and Liu, Xueping},
  journal={arXiv preprint arXiv:2405.19671},
  year={2024}
}