Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation
This is an implementation of our work "Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation". DiffusionGS is single-stage and does not rely on a 2D multi-view diffusion model. It can generate 3D objects and scenes from a single view in ~6 seconds. If you find our repo useful, please give it a star ⭐ and consider citing our paper. Thank you :)
- 2024.11.22 : Our project page is now live. Feel free to check out the video and interactive generation results there.
- 2024.11.21 : We have uploaded the prompt images and our generation results to our Hugging Face dataset. Feel free to download them and compare against your method (see the download sketch after this list). 🤗
- 2024.11.20 : Our paper is now available on arXiv. 🚀
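
Below is a minimal sketch of how the dataset could be fetched with `huggingface_hub`. The `repo_id` shown is a placeholder, not the actual dataset name; replace it with the dataset id linked on our project page.

```python
# Sketch: download the prompt images and generation results from Hugging Face.
# Assumes `huggingface_hub` is installed (pip install huggingface_hub).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="<user>/DiffusionGS-results",  # hypothetical id; substitute the real dataset repo
    repo_type="dataset",
    local_dir="./diffusiongs_data",
)
print(f"Dataset downloaded to {local_path}")
```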
@article{cai2024baking,
title={Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation},
author={Yuanhao Cai and He Zhang and Kai Zhang and Yixun Liang and Mengwei Ren and Fujun Luan and Qing Liu and Soo Ye Kim and Jianming Zhang and Zhifei Zhang and Yuqian Zhou and Zhe Lin and Alan Yuille},
journal={arXiv preprint arXiv:2411.14384},
year={2024}
}