[Paper] | [Project Page] | [3DGS Model]
This repository is the official implementation of:
SA-GS: Scale-Adaptive Gaussian Splatting for Training-Free Anti-Aliasing
Authors: Xiaowei Song*, Jv Zheng*, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, Hao Zhao
We introduce SA-GS, a training-free approach that can be applied directly to the inference process of any pretrained 3DGS model to remove the visual artefacts that arise under drastically changed rendering settings.
3DGS has attracted industry attention for its high-quality view synthesis and fast rendering. However, rendering quality can degrade when settings such as resolution, distance, and focal length differ from those used during training. Existing methods address this by adding regularity to the Gaussian primitives in both 3D and 2D space during training, but they overlook a significant drawback of 3DGS under changed rendering settings: the scale ambiguity problem. This problem directly prevents 3DGS from using conventional anti-aliasing techniques. We propose and analyse this problem for the first time, and correct it using only 2D scale-adaptive filters. Building on this, we apply conventional anti-aliasing methods, namely integration and super-sampling, to remove the aliasing caused by insufficient sampling frequency. Notably, ours is the first Gaussian anti-aliasing technique that requires no training, so it can be integrated directly into existing 3DGS models to enhance their anti-aliasing capabilities. We validate the method on both bounded and unbounded scenes; the experimental results demonstrate that it achieves robust anti-aliasing improvements with the best efficiency, surpassing or equaling the current state of the art.
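For intuition, the core idea can be sketched in a few lines. Vanilla 3DGS adds a fixed screen-space dilation (0.3 in the reference rasterizer) to every projected 2D covariance, which is only consistent at the training sampling rate; a scale-adaptive filter rescales that dilation with the ratio of sampling intervals. The NumPy sketch below is a conceptual illustration under our own assumptions, not the repository's CUDA implementation, and the exact formula in the paper may differ:

```python
import numpy as np

def scale_adaptive_cov2d(cov2d, train_interval, render_interval, dilation=0.3):
    """Conceptual sketch of a 2D scale-adaptive filter (illustrative only).

    Rescaling the fixed dilation by the ratio of the training to the
    rendering sampling interval keeps the splat's effective footprint
    consistent with what was learned at training time when resolution,
    distance, or focal length change.
    """
    scale = train_interval / render_interval  # < 1 when rendering coarser
    return cov2d + dilation * (scale ** 2) * np.eye(2)

# Example: a splat trained at full resolution, rendered at 1/8 resolution.
cov2d = np.array([[1.0, 0.2], [0.2, 0.8]])
print(scale_adaptive_cov2d(cov2d, train_interval=1.0, render_interval=8.0))
```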
cd SA-GS
conda create -n SA-GS python=3.9
conda activate SA-GS
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
pip install submodules/simple-knn/
pip install submodules/diff-gaussian-rasterization_new/
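After installation, a quick import check can confirm that the CUDA extensions compiled correctly (the package names below are the usual ones for 3DGS forks; adjust if your build installs them differently):

```python
# Sanity check: both compiled submodules should import without errors.
import torch
from simple_knn._C import distCUDA2                          # KNN kernel used during training
from diff_gaussian_rasterization import GaussianRasterizer   # the modified rasterizer

print("CUDA available:", torch.cuda.is_available())
```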
Please download and unzip nerf_synthetic.zip from the official NeRF Google Drive, then generate the multi-scale Blender dataset with:
python convert_blender_data.py --blender_dir nerf_synthetic/ --out_dir multi-scale
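After conversion, each scene under multi-scale/ should contain images at several downsampling factors plus metadata. A small script can confirm that every scene was written (the per-scene layout is whatever convert_blender_data.py emits; this only counts files):

```python
import os

# Assumed output root from the command above.
out_dir = "multi-scale"
for scene in sorted(os.listdir(out_dir)):
    n_files = sum(len(files) for _, _, files in os.walk(os.path.join(out_dir, scene)))
    print(f"{scene}: {n_files} files")
```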
Please download the data from the Mip-NeRF 360 project page and request the treehill and flowers scenes from the authors.
Please download and unzip models.zip from Google Drive. The model folder should then look like this:
<your/model/path>
|-- point_cloud
|   |-- iteration_xxxx
|       |-- point_cloud.ply
|-- cameras.json
|-- cfg_args
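To sanity-check a downloaded or trained model against this layout, the checkpoint can be inspected with plyfile (already a 3DGS dependency); replace iteration_xxxx with the actual iteration folder:

```python
from plyfile import PlyData

# Load the Gaussian checkpoint and report its size and attribute names.
ply = PlyData.read("<your/model/path>/point_cloud/iteration_xxxx/point_cloud.ply")
vertex = ply["vertex"]
print("Number of Gaussians:", vertex.count)
print("First attributes:", [p.name for p in vertex.properties][:8])
```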
Our code integrates the training process of vanilla 3DGS, so a model can be trained with the commands below. Alternatively, you can use a pretrained 3DGS model, e.g. downloaded from here, or a model you have trained separately (as long as it satisfies the model directory layout above).
# single-scale training on NeRF-Synthetic dataset
python train.py -s /your/dataset/scene/path -m /your/output/path --save_iterations 30000 -r 1
# multi-scale training on NeRF-Synthetic dataset
python train.py -s /your/dataset/scene/path -m /your/output/path --save_iterations 30000 --load_allres
# single-scale training on Mip-NeRF 360 dataset
python train.py -s /your/dataset/scene/path -m /your/output/path --save_iterations 30000 -r 1
Render using our method. There are four modes to choose from: source-GS, only-filter, integration, and super-sampling:
# Multi-scale testing on NeRF-synthetic dataset
python render_blender.py -s /your/data/path -m /your/model/path --save_name OUTPUT --load_allres --mode integration --resolution_train 1 --eval
# Single-scale testing on NeRF-synthetic dataset
# -r "your render resolution" --resolution_train "your train resolution"
python render_blender.py -s /your/data/path -m /your/model/path --save_name OUTPUT -r 8 --mode integration --resolution_train 1 --eval
# Single-scale testing on Mip-NeRF 360 dataset
python render_360.py -s /your/data/path -m /your/model/path --save_name OUTPUT -r 8 --mode integration --resolution_train 1
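To compare all four modes on the same scene side by side, a small driver script can loop over them (a convenience sketch mirroring the single-scale command above; adjust paths and flags to your setup):

```python
import subprocess

MODES = ["source-GS", "only-filter", "integration", "super-sampling"]

for mode in MODES:
    # One render per mode, each written to its own output folder.
    subprocess.run(
        ["python", "render_blender.py",
         "-s", "/your/data/path",
         "-m", "/your/model/path",
         "--save_name", f"OUTPUT_{mode}",
         "-r", "8",
         "--mode", mode,
         "--resolution_train", "1",
         "--eval"],
        check=True,
    )
```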
We support user-defined camera tracks and camera parameters for scene rendering:
python render_custom.py -s /your/data/path -m /your/model/path --save_name OUTPUT --mode integration
If you use our work in your research, please consider citing our paper. This helps us conduct follow-up research and track the impact of this work.
@article{song2024sa,
title={SA-GS: Scale-Adaptive Gaussian Splatting for Training-Free Anti-Aliasing},
author={Song, Xiaowei and Zheng, Jv and Yuan, Shiran and Gao, Huan-ang and Zhao, Jingwei and He, Xiang and Gu, Weihao and Zhao, Hao},
journal={arXiv preprint arXiv:2403.19615},
year={2024}
}
This project is built upon 3DGS and Mip-Splatting. Please follow the licenses of 3DGS and Mip-Splatting. We thank all the authors for their great work and repos.