Open-DiffusionGS

Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation

 



(Demo results: ABO, GSO, real-image, and in-the-wild inputs; inputs generated by SD 2, SD 1, and Flux 1 (e.g., green man); and scene generations of plaza, town, cliff, and art gallery.)

 

Introduction

This is an implementation of our work "Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation". DiffusionGS is single-stage and does not rely on a 2D multi-view diffusion model. It generates 3D objects and scenes from a single view in ~6 seconds. If you find our repo useful, please give it a star ⭐ and consider citing our paper. Thank you :)

(Overview of the DiffusionGS pipeline.)
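To illustrate the single-stage idea at a glance, here is a minimal, hypothetical PyTorch-style sketch (not the repo's actual API): a denoiser maps noisy target views, a clean condition view, and the diffusion timestep directly to pixel-aligned 3D Gaussian parameters, and the rendered Gaussians drive the denoising loss. All names (GaussianDenoiser, render_gaussians) and the toy 14-channel Gaussian parameterization are placeholders for illustration only.

import torch
import torch.nn as nn

class GaussianDenoiser(nn.Module):
    # Toy stand-in: predicts per-pixel Gaussian parameters
    # (3 mean + 3 scale + 4 rotation + 1 opacity + 3 color = 14 channels).
    def __init__(self, in_ch=3, gaussian_ch=14, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch * 2 + 1, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, gaussian_ch, 3, padding=1),
        )

    def forward(self, noisy_view, cond_view, t):
        # Broadcast the diffusion timestep as an extra input channel (toy conditioning).
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *noisy_view.shape[-2:])
        x = torch.cat([noisy_view, cond_view, t_map], dim=1)
        return self.net(x)  # (B, 14, H, W): one Gaussian per pixel

def render_gaussians(gaussians):
    # Placeholder for a differentiable Gaussian-splatting renderer.
    # A real renderer would rasterize the predicted Gaussians from novel camera
    # poses; here we simply read back the color channels so the sketch runs.
    return gaussians[:, -3:]

# One toy training step: noise the target view, denoise it into Gaussians,
# render, and regress the rendering against the clean target.
B, H, W = 2, 64, 64
target = torch.rand(B, 3, H, W)   # clean target view
cond = torch.rand(B, 3, H, W)     # clean single-view condition image
t = torch.rand(B)                 # diffusion timestep in [0, 1]
noisy = (1 - t.view(-1, 1, 1, 1)) * target + t.view(-1, 1, 1, 1) * torch.randn_like(target)

model = GaussianDenoiser()
gaussians = model(noisy, cond, t)
rendered = render_gaussians(gaussians)
loss = nn.functional.mse_loss(rendered, target)
loss.backward()
print(f"toy denoising loss: {loss.item():.4f}")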

News

  • 2024.11.22 : Our project page is now live. Feel free to check the videos and interactive generation results there.
  • 2024.11.21 : We have uploaded the prompt images and our generation results to our Hugging Face dataset. Feel free to download them and compare with your method. 🤗
  • 2024.11.20 : Our paper is now on arXiv. 🚀

Comparison with State-of-the-Art Methods

Qualitative Comparison

(Qualitative comparison with state-of-the-art methods.)

Quantitative Comparison

(Quantitative comparison results.)

 

Citation

@article{cai2024baking,
  title={Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation},
  author={Yuanhao Cai and He Zhang and Kai Zhang and Yixun Liang and Mengwei Ren and Fujun Luan and Qing Liu and Soo Ye Kim and Jianming Zhang and Zhifei Zhang and Yuqian Zhou and Zhe Lin and Alan Yuille},
  journal={arXiv preprint arXiv:2411.14384},
  year={2024}
}