Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai
[2024.02.27] Accepted to CVPR 2024.
[2024.01.22] We added an appearance embedding to Scaffold-GS to handle in-the-wild scenes.
[2024.01.22] The viewer for Scaffold-GS is available now.
[2023.12.10] We release the code.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians, and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum.
Our method performs particularly well on scenes with challenging viewing conditions, e.g. transparency, specularity, reflection, texture-less regions, and fine-scale details.
We tested on a server configured with Ubuntu 18.04, CUDA 11.6, and GCC 9.4.0. Other similar configurations should also work, but we have not verified each one individually.
- Clone this repo:
git clone https://github.com/city-super/Scaffold-GS.git --recursive
cd Scaffold-GS
- Install dependencies
SET DISTUTILS_USE_SDK=1 # Windows only
conda env create --file environment.yml
conda activate scaffold_gs
First, create a data/ folder inside the project path:
mkdir data
The data structure will be organised as follows:
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │   │   └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   ├── ...
│   │   ├── sparse/
│   │   │   └── 0/
...
The BungeeNeRF dataset is available on Google Drive / Baidu Netdisk [extraction code: 4whv]. The MipNeRF360 scenes are provided by the paper authors here; we test on the bicycle, bonsai, counter, garden, kitchen, room, and stump scenes. The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the data/ folder.
For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results into the data/ folder.
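For reference, a minimal sparse reconstruction with COLMAP's command-line interface might look like the following (scene_name is a placeholder; see the COLMAP documentation for the full set of options):
colmap feature_extractor --database_path data/scene_name/database.db --image_path data/scene_name/images
colmap exhaustive_matcher --database_path data/scene_name/database.db
mkdir -p data/scene_name/sparse
colmap mapper --database_path data/scene_name/database.db --image_path data/scene_name/images --output_path data/scene_name/sparse
This writes the sparse/0/ model expected by the data layout above.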
To train multiple scenes in parallel, we provide batch training scripts:
- Tanks&Temples: train_tnt.sh
- MipNeRF360: train_mip360.sh
- BungeeNeRF: train_bungee.sh
- Deep Blending: train_db.sh
- NeRF Synthetic: base -> train_nerfsynthetic.sh; with warmup -> train_nerfsynthetic_withwarmup.sh
Run them with:
bash train_xxx.sh
Notice 1: Make sure you have enough GPU cards and memory to run these scenes at the same time.
Notice 2: Each process occupies many CPU cores, which may slow down training. Set torch.set_num_threads(32) accordingly in train.py to alleviate it.
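A minimal illustration of that setting (32 is just the value mentioned above; choose it based on your core count and the number of parallel runs):
import torch

# Cap the CPU threads this training process may use so that several
# parallel runs do not oversubscribe the machine.
torch.set_num_threads(32)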
For training a single scene, modify the paths and configurations in single_train.sh accordingly and run it:
bash ./single_train.sh
- scene: scene name, in the format dataset_name/scene_name/ or scene_name/;
- exp_name: user-defined experiment name;
- gpu: the GPU id to run the code on; '-1' denotes using the most idle GPU.
- voxel_size: size for voxelizing the SfM points; a smaller value means finer structure and higher overhead, and '0' means using the median of each point's 1-NN distance as the voxel size (see the sketch after this list).
- update_init_factor: initial resolution for growing new anchors; a larger value starts placing new anchors at a coarser resolution.
For these public datasets, the 'voxel_size' and 'update_init_factor' configurations can be taken from the batch training scripts above.
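To illustrate what voxel_size controls, here is a hedged sketch of the voxelization step (not the repo's exact implementation; the function name is ours, and the O(N^2) distance computation is only meant for modest point counts):
import torch

def voxelize_sfm_points(points, voxel_size=0.0):
    """Snap SfM points to a voxel grid, keeping one anchor per occupied voxel.

    points: (N, 3) float tensor. If voxel_size <= 0, fall back to the median
    1-NN distance, as described above.
    """
    if voxel_size <= 0:
        d = torch.cdist(points, points)     # (N, N) pairwise distances
        d.fill_diagonal_(float('inf'))      # exclude self-distances
        voxel_size = d.min(dim=1).values.median().item()
    voxels = torch.unique(torch.round(points / voxel_size), dim=0)
    return voxels * voxel_size, voxel_size  # anchor positions, resolved voxel size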
This script will automatically store the log (with running-time code) into outputs/dataset_name/scene_name/exp_name/cur_time.
We've integrated the rendering and metrics calculation into the training code, so when training completes, the rendering results, FPS, and quality metrics are printed automatically, and the renderings are saved in the log directory. Note that the FPS is roughly estimated by
torch.cuda.synchronize(); t_start = time.time()
# rendering...
torch.cuda.synchronize(); t_end = time.time()
which may differ somewhat from the original 3D-GS, but this does not affect the analysis.
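For intuition, an FPS measurement of that shape can be assembled as below (a sketch only; test_views and render are placeholder names, not the repo's API):
import time
import torch

torch.cuda.synchronize()                   # flush pending GPU work before timing
t_start = time.time()
for view in test_views:                    # placeholder: iterable of test cameras
    render(view)                           # placeholder: one rendering pass
torch.cuda.synchronize()                   # wait until all kernels have finished
t_end = time.time()
fps = len(test_views) / (t_end - t_start)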
Meanwhile, we keep a manual rendering function whose usage is similar to its counterpart in 3D-GS; you can run it by:
python render.py -m <path to trained model> # Generate renderings
python metrics.py -m <path to trained model> # Compute error metrics on renderings
The viewer for Scaffold-GS is available now.
- Tao Lu: taolu@smail.nju.edu.cn
- Mulin Yu: yumulin@pjlab.org.cn
If you find our work helpful, please consider citing:
@misc{scaffoldgs,
      title={Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering},
      author={Tao Lu and Mulin Yu and Linning Xu and Yuanbo Xiangli and Limin Wang and Dahua Lin and Bo Dai},
      year={2023},
      eprint={2312.00109},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Please follow the LICENSE of 3D-GS.
We thank all the authors of 3D-GS for their excellent work.