This is a collection of documents and topics accumulated by the NeRF/3DGS & Beyond channel, as well as papers from the literature. Since there are lots of papers out there, we split them into two separate repositories: NeRF and Beyond Docs and 3DGS and Beyond Docs. Please choose according to your preference.
For some papers we discussed in the group, a Notes link is appended to the entry. You can follow the link to check whether a topic you are interested in has been covered. If not, welcome to join us and ask the question to the crowd. The mighty community might have your answers.
We are actively maintaining this page, trying to stay up-to-date and gather important works on a daily basis. We would also like to attach notes to as many works as possible, to make it easier to catch up.
Please feel free to join us on WeChat group or start a discussion topic here.
I have recently published a book with PHEI (Publishing House of Electronics Industry) on NeRF/3DGS. This would not have been possible without the help of the whole 3D vision community. It is now available on jd.com (check it out here), and it should be suitable as a reference handbook for NeRF/3DGS beginners or engineers in related areas. I sincerely hope the book can be helpful in any way.
For those of you who have already purchased the book, all references can be downloaded HERE. If you experience any issue reading the book or have any suggestions to improve it, please contact me through my email address jiheng.yang@gmail.com, or directly on WeChat: jiheng_yang. I'm looking forward to talking to anyone reaching out, thanks in advance.
For now, you can join us in the following ways:
- Bilibili Channel, where we post near-daily updates (primarily) on NeRF.
- WeChat group: due to the size limitation of WeChat groups, please add my personal account jiheng_yang, and I will add you to the chat groups.
- If you want to view this from a timeline perspective, please refer to this ProcessOn Diagram
- If you think something is not correct or that we could do better in some way, please reach out through any of these channels or open an issue. All suggestions are appreciated!
- For other techniques related to 3D reconstruction and NeRF, please refer to this link; we are constantly trying to add more resources to this document.
For NeRF-related progress, you can refer to NeRF and Beyond Docs.
3DGS and Beyond Docs
- NeRF/3DGS Book
- How to join us
- NeRF Progresses
- 3DGS Original Paper
- 3DGS Surveys
- 3DGS Frameworks
- 3DGS Profiling
- 3DGS Distributed Training
- 3DGS Quality Enhancement
- 3DGS with Lower Memory Footprint
- 3DGS with Ray Tracing
- 3DGS Acceleration
- 3DGS Geometry Reconstruction
- 3DGS+Mesh For Reconstruction
- 3DGS Based Dynamic Scene
- 3DGS + Depth
- 3DGS Based Depth Estimation
- 3DGS Few-shot Reconstruction
- 3DGS Weak Camera Pose
- 3DGS Object Pose Estimation/Tracking/Detection
- 3DGS-NeRF Transfer
- 3DGS Generalization
- Generalizable 3DGS with Feed-forward Networks
- 3DGS Indoor Scene Reconstruction
- 3DGS Based Wild Scene Reconstruction
- 3DGS Based Large Scene Reconstruction
- 3DGS Autonomous Driving
- 3DGS Based Occupancy Prediction
- 3DGS Based on Diffusion
- 3DGS Based AIGC
- 3DGS Model Compression
- 3DGS Streaming
- 3DGS Based Relighting
- 3DGS Robotics
- 3DGS Avatar Generation
- 3DGS Clothes/Garment
- 3DGS Scene Editing and Animation
- 3DGS Stylization
- 3DGS Based Video Editing
- 3DGS for Computer Graphics
- 3DGS Based Scene Understanding
- 3DGS based Segmentation
- 3DGS + Specular
- 3DGS Based SLAM
- 3DGS Based 3D Point Tracking
- 3DGS Based Inverse Rendering
- 3DGS Imaging Tasks
- 3DGS for Reflective and Transparent Objects
- 3DGS Superresolution
- 3DGS with/for Point Cloud
- 3DGS for CV Tasks
- 3DGS with Hardware
- 3DGS Applications
- 3DGS Artifact Detection
- 3DGS Copyright/Safety
- 3DGS Applications in UAV/MAV
- 3DGS Applications in Satellite Images
- 3DGS Network Applications
- 3DGS for Acoustic
- 3DGS with Panorama
- 3DGS with Thermal
- 3DGS with Fisheye Camera
- 3DGS with Compressive Sensing
- Other NVS Methods
- Other Upstream Work (Occasionally Came Across)
- Other Surveys
- Contributors
- License
🔥3D Gaussian Splatting for Real-Time Radiance Field Rendering
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis
ACM ToG 2023, 8 Aug 2023
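For quick reference while reading the entries below: the core rendering step this paper introduced is depth-sorted alpha compositing of splatted Gaussians. In the commonly used notation (ours, summarizing the standard formulation rather than quoting the paper):

```latex
C(\mathbf{x}) \;=\; \sum_{i \in \mathcal{N}} c_i\,\alpha_i \prod_{j=1}^{i-1}\bigl(1-\alpha_j\bigr),
\qquad
\alpha_i \;=\; o_i \exp\!\Bigl(-\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_i')^{\top}\,{\boldsymbol{\Sigma}_i'}^{-1}\,(\mathbf{x}-\boldsymbol{\mu}_i')\Bigr)
```

where μ′_i and Σ′_i are the projected 2D mean and covariance of Gaussian i, o_i its learned opacity, and the sum runs front-to-back over the splats covering pixel x. Most papers in this list modify one piece of this pipeline: the primitive, the projection, the sort, or the densification that controls the set N.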
A Survey on 3D Gaussian Splatting
Guikun Chen, Wenguan Wang
arXiv preprint, 8 Jan 2024
[arXiv]
3D Gaussian as a New Vision Era: A Survey
Ben Fei, Jingyi Xu, Rui Zhang, Qingyuan Zhou, Weidong Yang, Ying He
arXiv preprint, 11 Feb 2024
[arXiv]
🔥Recent Advances in 3D Gaussian Splatting
Tong Wu, Yu-Jie Yuan, Ling-Xiao Zhang, Jie Yang, Yan-Pei Cao, Ling-Qi Yan, Lin Gao
arXiv preprint, 17 Mar 2024
Abstract
The emergence of 3D Gaussian Splatting (3DGS) has greatly accelerated the rendering speed of novel view synthesis. Unlike neural implicit representations like Neural Radiance Fields (NeRF) that represent a 3D scene with position and viewpoint-conditioned neural networks, 3D Gaussian Splatting utilizes a set of Gaussian ellipsoids to model the scene so that efficient rendering can be accomplished by rasterizing Gaussian ellipsoids into images. Apart from the fast rendering speed, the explicit representation of 3D Gaussian Splatting facilitates editing tasks like dynamic reconstruction, geometry editing, and physical simulation. Considering the rapid change and growing number of works in this field, we present a literature review of recent 3D Gaussian Splatting methods, which can be roughly classified into 3D reconstruction, 3D editing, and other downstream applications by functionality. Traditional point-based rendering methods and the rendering formulation of 3D Gaussian Splatting are also illustrated for a better understanding of this technique. This survey aims to help beginners get into this field quickly and provide experienced researchers with a comprehensive overview, which can stimulate the future development of the 3D Gaussian Splatting representation.
[arXiv]
Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review
Anurag Dalal, Daniel Hagen, Kjell G. Robbersmyr, Kristian Muri Knausgård
arXiv preprint, 6 May 2024
[arXiv]
Survey on Fundamental Deep Learning 3D Reconstruction Techniques
Yonge Bai, LikHang Wong, TszYin Twan
arXiv preprint, 11 Jul 2024
[arXiv]
3D Gaussian Splatting: Survey, Technologies, Challenges, and Opportunities
Yanqi Bao, Tianyu Ding, Jing Huo, Yaoli Liu, Yuxin Li, Wenbin Li, Yang Gao, Jiebo Luo
arXiv preprint, 24 Jul 2024
[arXiv]
3D Representation Methods: A Survey
Zhengren Wang
arXiv preprint, 9 Oct 2024
[arXiv]
🔥GauStudio: A Modular Framework for 3D Gaussian Splatting and Beyond
Chongjie Ye, Yinyu Nie, Jiahao Chang, Yuantao Chen, Yihao Zhi, Xiaoguang Han
arXiv preprint, 28 Mar 2024
Abstract
We present GauStudio, a novel modular framework for modeling 3D Gaussian Splatting (3DGS) to provide standardized, plug-and-play components for users to easily customize and implement a 3DGS pipeline. Supported by our framework, we propose a hybrid Gaussian representation with foreground and skyball background models. Experiments demonstrate this representation reduces artifacts in unbounded outdoor scenes and improves novel view synthesis. Finally, we propose Gaussian Splatting Surface Reconstruction (GauS), a novel render-then-fuse approach for high-fidelity mesh reconstruction from 3DGS inputs without fine-tuning. Overall, our GauStudio framework, hybrid representation, and GauS approach enhance 3DGS modeling and rendering capabilities, enabling higher-quality novel view synthesis and surface reconstruction.
gsplat: An Open-Source Library for Gaussian Splatting
Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa
arXiv preprint, 10 Sep 2024
[arXiv]
SuperSplat - 3D Gaussian Splat Editor
PlayCanvas
[Code]
NerfBaselines: Consistent and Reproducible Evaluation of Novel View Synthesis Methods
Jonas Kulhanek, Torsten Sattler
arXiv preprint, 25 Jun 2024
[arXiv] [Project]
RetinaGS: Scalable Training for Dense Scene Rendering with Billion-Scale 3D Gaussians
Bingling Li, Shengyi Chen, Luchao Wang, Kaimin He, Sijie Yan, Yuanjun Xiong
arXiv preprint, 17 Jun 2024
[arXiv]
On Scaling Up 3D Gaussian Splatting Training
Hexu Zhao, Haoyang Weng, Daohan Lu, Ang Li, Jinyang Li, Aurojit Panda, Saining Xie
arXiv preprint, 26 Jun 2024
[arXiv] [Project] [Code]
🔥Mip-Splatting: Alias-free 3D Gaussian Splatting
Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger
arXiv preprint, 27 Nov 2023
Abstract
Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency. However, strong artifacts can be observed when changing the sampling rate, e.g., by changing focal length or camera distance. We find that the source for this phenomenon can be attributed to the lack of 3D frequency constraints and the usage of a 2D dilation filter. To address this problem, we introduce a 3D smoothing filter which constrains the size of the 3D Gaussian primitives based on the maximal sampling frequency induced by the input views, eliminating high-frequency artifacts when zooming in. Moreover, replacing 2D dilation with a 2D Mip filter, which simulates a 2D box filter, effectively mitigates aliasing and dilation issues. Our evaluation, including scenarios such as training on single-scale images and testing on multiple scales, validates the effectiveness of our approach.
Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering
Zhiwen Yan, Weng Fei Low, Yu Chen, Gim Hee Lee
arXiv preprint, 28 Nov 2023
[arXiv] [Project] [Code] [Video]
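A minimal sketch of the 2D Mip filter idea from the Mip-Splatting entry above: vanilla 3DGS dilates every screen-space Gaussian by a fixed amount, which inflates the brightness of small splats; the Mip filter instead convolves the 2D covariance with a roughly pixel-sized low-pass Gaussian and rescales opacity by a determinant ratio so the splat's total energy is preserved. The function below is our illustration under those assumptions; the variable names and the filter variance `s` are not taken from the paper's code.

```python
import numpy as np

def mip_filtered_alpha(mean2d, cov2d, opacity, x, s=0.1):
    """Evaluate a projected 2D Gaussian with an approximate Mip (low-pass) filter.

    cov2d: 2x2 screen-space covariance; s: variance of the pixel-sized
    low-pass filter (an assumed value).
    """
    cov_f = cov2d + s * np.eye(2)          # convolve with a low-pass Gaussian
    # Rescale opacity by the determinant ratio so the splat's total energy
    # is preserved; a plain dilation would brighten small Gaussians instead.
    energy = np.sqrt(np.linalg.det(cov2d) / np.linalg.det(cov_f))
    d = np.asarray(x) - np.asarray(mean2d)
    return opacity * energy * np.exp(-0.5 * d @ np.linalg.inv(cov_f) @ d)
```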
🔥Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai
arXiv preprint, 30 Nov 2023
Abstract
Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method has achieved the state-of-the-art rendering quality and speed combining the benefits of both primitive-based representations and volumetric representations. However, it often leads to heavily redundant Gaussians that try to fit every training view, neglecting the underlying scene geometry. Consequently, the resulting model becomes less robust to significant view changes, texture-less areas and lighting effects. We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians, and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum. Anchor growing and pruning strategies are developed based on the importance of neural Gaussians to reliably improve the scene coverage. We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels-of-detail and view-dependent observations, without sacrificing the rendering speed.
Gaussian Splitting Algorithm with Color and Opacity Depended on Viewing Direction
Dawid Malarz, Weronika Smolak, Jacek Tabor, Sławomir Tadeja, Przemysław Spurek
arXiv preprint, 21 Dec 2023
[arXiv]
🔥TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering
Linus Franke, Darius Rückert, Laura Fink, Marc Stamminger
Eurographics 2024, 11 Jan 2024
Abstract
Point-based radiance field rendering has demonstrated impressive results for novel view synthesis, offering a compelling blend of rendering quality and computational efficiency. However, even the latest approaches in this domain are not without their shortcomings. 3D Gaussian Splatting [Kerbl and Kopanas et al. 2023] struggles when tasked with rendering highly detailed scenes, due to blurring and cloudy artifacts. On the other hand, ADOP [Rückert et al. 2022] can accommodate crisper images, but the neural reconstruction network decreases performance, it grapples with temporal instability and it is unable to effectively address large gaps in the point cloud. In this paper, we present TRIPS (Trilinear Point Splatting), an approach that combines ideas from both Gaussian Splatting and ADOP. The fundamental concept behind our novel technique involves rasterizing points into a screen-space image pyramid, with the selection of the pyramid layer determined by the projected point size. This approach allows rendering arbitrarily large points using a single trilinear write. A lightweight neural network is then used to reconstruct a hole-free image including detail beyond splat resolution. Importantly, our render pipeline is entirely differentiable, allowing for automatic optimization of both point sizes and positions. Our evaluation demonstrates that TRIPS surpasses existing state-of-the-art methods in terms of rendering quality while maintaining a real-time frame rate of 60 frames per second on readily available hardware. This performance extends to challenging scenarios, such as scenes featuring intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: this https URL
[arXiv] [Project] [Code] [Video]
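The pyramid-layer selection described in the TRIPS abstract can be sketched as follows: the projected point radius picks two adjacent layers of a screen-space image pyramid, and the point is written with bilinear weights within each layer plus a linear weight across layers, giving a single trilinear write. All names below are our assumptions for illustration, not the paper's code.

```python
import math

def pyramid_write(point_radius_px: float, num_levels: int):
    """Pick two adjacent pyramid layers and a blend weight for one point,
    so it is written trilinearly: bilinearly within each layer, linearly
    across layers. Larger projected points land on coarser layers."""
    level = min(max(math.log2(max(point_radius_px, 1.0)), 0.0), num_levels - 1.0)
    lo = int(math.floor(level))
    hi = min(lo + 1, num_levels - 1)
    w_hi = level - lo                      # fraction written to the coarser layer
    return lo, hi, 1.0 - w_hi, w_hi
```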
On the Error Analysis of 3D Gaussian Splatting and an Optimal Projection Strategy
Letian Huang, Jiayang Bai, Jie Guo, Yanwen Guo
ECCV 2024, 1 Feb 2024
[arXiv] [Project] [Code]
FreGS: 3D Gaussian Splatting with Progressive Frequency Regularization
Jiahui Zhang, Fangneng Zhan, Muyu Xu, Shijian Lu, Eric Xing
CVPR 2024, 11 Mar 2024
[arXiv] [Project]
🔥Analytic-Splatting: Anti-Aliased 3D Gaussian Splatting via Analytic Integration
Zhihao Liang, Qi Zhang, Wenbo Hu, Ying Feng, Lei Zhu, Kui Jia
ECCV 2024, 16 Mar 2024
Abstract
The 3D Gaussian Splatting (3DGS) gained its popularity recently by combining the advantages of both primitive-based and volumetric 3D representations, resulting in improved quality and efficiency for 3D scene rendering. However, 3DGS is not alias-free, and its rendering at varying resolutions could produce severe blurring or jaggies. This is because 3DGS treats each pixel as an isolated, single point rather than as an area, causing insensitivity to changes in the footprints of pixels. Consequently, this discrete sampling scheme inevitably results in aliasing, owing to the restricted sampling bandwidth. In this paper, we derive an analytical solution to address this issue. More specifically, we use a conditioned logistic function as the analytic approximation of the cumulative distribution function (CDF) in a one-dimensional Gaussian signal and calculate the Gaussian integral by subtracting the CDFs. We then introduce this approximation in the two-dimensional pixel shading, and present Analytic-Splatting, which analytically approximates the Gaussian integral within the 2D-pixel window area to better capture the intensity response of each pixel. Moreover, we use the approximated response of the pixel window integral area to participate in the transmittance calculation of volume rendering, making Analytic-Splatting sensitive to the changes in pixel footprint at different resolutions. Experiments on various datasets validate that our approach has better anti-aliasing capability that gives more details and better fidelity.
🔥Mini-Splatting: Representing Scenes with a Constrained Number of Gaussians
Guangchi Fang, Bing Wang
ECCV 2024, 21 Mar 2024
Abstract
In this study, we explore the challenge of efficiently representing scenes with a constrained number of Gaussians. Our analysis shifts from traditional graphics and 2D computer vision to the perspective of point clouds, highlighting the inefficient spatial distribution of Gaussian representation as a key limitation in model performance. To address this, we introduce strategies for densification including blur split and depth reinitialization, and simplification through intersection preserving and sampling. These techniques reorganize the spatial positions of the Gaussians, resulting in significant improvements across various datasets and benchmarks in terms of rendering quality, resource consumption, and storage compression. Our Mini-Splatting integrates seamlessly with the original rasterization pipeline, providing a strong baseline for future research in Gaussian-Splatting-based works. Code is available at this https URL.
🔥Pixel-GS: Density Control with Pixel-aware Gradient for 3D Gaussian Splatting
Zheng Zhang, Wenbo Hu, Yixing Lao, Tong He, Hengshuang Zhao
ECCV 2024, 22 Mar 2024
Abstract
3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis results while advancing real-time rendering performance. However, it relies heavily on the quality of the initial point cloud, resulting in blurring and needle-like artifacts in areas with insufficient initializing points. This is mainly attributed to the point cloud growth condition in 3DGS that only considers the average gradient magnitude of points from observable views, thereby failing to grow for large Gaussians that are observable for many viewpoints while many of them are only covered in the boundaries. To this end, we propose a novel method, named Pixel-GS, to take into account the number of pixels covered by the Gaussian in each view during the computation of the growth condition. We regard the covered pixel numbers as the weights to dynamically average the gradients from different views, such that the growth of large Gaussians can be prompted. As a result, points within the areas with insufficient initializing points can be grown more effectively, leading to a more accurate and detailed reconstruction. In addition, we propose a simple yet effective strategy to scale the gradient field according to the distance to the camera, to suppress the growth of floaters near the camera. Extensive experiments both qualitatively and quantitatively demonstrate that our method achieves state-of-the-art rendering quality while maintaining real-time rendering speed, on the challenging Mip-NeRF 360 and Tanks & Temples datasets.
🔥SA-GS: Scale-Adaptive Gaussian Splatting for Training-Free Anti-Aliasing
Xiaowei Song, Jv Zheng, Shiran Yuan, Huan-ang Gao, Jingwei Zhao, Xiang He, Weihao Gu, Hao Zhao
arXiv preprint, 28 Mar 2024
Abstract
In this paper, we present a Scale-adaptive method for Anti-aliasing Gaussian Splatting (SA-GS). While the state-of-the-art method Mip-Splatting needs modifying the training procedure of Gaussian splatting, our method functions at test-time and is training-free. Specifically, SA-GS can be applied to any pretrained Gaussian splatting field as a plugin to significantly improve the field's anti-aliasing performance. The core technique is to apply 2D scale-adaptive filters to each Gaussian during test time. As pointed out by Mip-Splatting, observing Gaussians at different frequencies leads to mismatches between the Gaussian scales during training and testing. Mip-Splatting resolves this issue using 3D smoothing and 2D Mip filters, which are unfortunately not aware of testing frequency. In this work, we show that a 2D scale-adaptive filter that is informed of testing frequency can effectively match the Gaussian scale, thus making the Gaussian primitive distribution remain consistent across different testing frequencies. When scale inconsistency is eliminated, sampling rates smaller than the scene frequency result in conventional jaggedness, and we propose to integrate the projected 2D Gaussian within each pixel during testing. This integration is actually a limiting case of super-sampling, which significantly improves anti-aliasing performance over vanilla Gaussian Splatting. Through extensive experiments using various settings and both bounded and unbounded scenes, we show SA-GS performs comparably with or better than Mip-Splatting. Note that super-sampling and integration are only effective when our scale-adaptive filtering is activated. Our codes, data and models are available at this https URL.
Robust Gaussian Splatting
François Darmon, Lorenzo Porzi, Samuel Rota-Bulò, Peter Kontschieder
arXiv preprint, 5 Apr 2024
[arXiv]
Revising Densification in Gaussian Splatting
Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
arXiv preprint, 9 Apr 2024
[arXiv]
EGGS: Edge Guided Gaussian Splatting for Radiance Fields
Yuanhao Gong
arXiv preprint, 14 Apr 2024
[arXiv]
🔥3D Gaussian Splatting as Markov Chain Monte Carlo
Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Weiwei Sun, Jeff Tseng, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi
arXiv preprint, 15 Apr 2024
Abstract
While 3D Gaussian Splatting has recently become popular for neural rendering, current methods rely on carefully engineered cloning and splitting strategies for placing Gaussians, which can lead to poor-quality renderings, and reliance on a good initialization. In this work, we rethink the set of 3D Gaussians as a random sample drawn from an underlying probability distribution describing the physical representation of the scene; in other words, Markov Chain Monte Carlo (MCMC) samples. Under this view, we show that the 3D Gaussian updates can be converted as Stochastic Gradient Langevin Dynamics (SGLD) updates by simply introducing noise. We then rewrite the densification and pruning strategies in 3D Gaussian Splatting as simply a deterministic state transition of MCMC samples, removing these heuristics from the framework. To do so, we revise the 'cloning' of Gaussians into a relocalization scheme that approximately preserves sample probability. To encourage efficient use of Gaussians, we introduce a regularizer that promotes the removal of unused Gaussians. On various standard evaluation scenes, we show that our method provides improved rendering quality, easy control over the number of Gaussians, and robustness to initialization.
AbsGS: Recovering Fine Details for 3D Gaussian Splatting
Zongxin Ye, Wenyu Li, Sidun Liu, Peng Qiao, Yong Dou
arXiv preprint, 16 Apr 2024
[arXiv] [Project] [Code]
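The identity underlying the 3D Gaussian Splatting as Markov Chain Monte Carlo entry above is the standard Stochastic Gradient Langevin Dynamics update, which turns gradient descent into approximate posterior sampling by injecting Gaussian noise. This is the generic textbook form; the paper's exact noise scaling over the Gaussian parameters differs.

```latex
\theta_{t+1} \;=\; \theta_t \;-\; \eta\,\nabla_{\theta}\mathcal{L}(\theta_t) \;+\; \sqrt{2\eta}\,\epsilon_t,
\qquad \epsilon_t \sim \mathcal{N}(0, I)
```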
Gaussian Splatting Decoder for 3D-aware Generative Adversarial Networks
Florian Barthel, Arian Beckmann, Wieland Morgenstern, Anna Hilsmann, Peter Eisert
CVPRW 2024, 16 Apr 2024
[arXiv]
Does Gaussian Splatting need SFM Initialization?
Yalda Foroutan, Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi
arXiv preprint, 18 Apr 2024
[arXiv] [Project]
Bootstrap 3D Reconstructed Scenes from 3D Gaussian Splatting
Yifei Gao, Jie Ou, Lei Wang, Jun Cheng
arXiv preprint, 29 Apr 2024
[arXiv]
Feature Splatting for Better Novel View Synthesis with Low Overlap
T. Berriel Martins, Javier Civera
arXiv preprint, 24 May 2024
[arXiv] [Code]
NegGS: Negative Gaussian Splatting
Artur Kasymov, Bartosz Czekaj, Marcin Mazur, Przemysław Spurek
arXiv preprint, 28 May 2024
[arXiv]
3D-HGS: 3D Half-Gaussian Splatting
Haolin Li, Jinyang Liu, Mario Sznaier, Octavia Camps
arXiv preprint, 4 Jun 2024
[arXiv]
Gaussian Splatting with Localized Points Management
Haosen Yang, Chenhao Zhang, Wenqing Wang, Marco Volino, Adrian Hilton, Li Zhang, Xiatian Zhu
arXiv preprint, 6 Jun 2024
[arXiv] [Code]
Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting
Junha Hyung, Susung Hong, Sungwon Hwang, Jaeseong Lee, Jaegul Choo, Jin-Hwa Kim
arXiv preprint, 17 Jun 2024
[arXiv] [Project]
Taming 3DGS: High-Quality Radiance Fields with Limited Resources
Saswat Subhajyoti Mallick, Rahul Goel, Bernhard Kerbl, Francisco Vicente Carrasco, Markus Steinberger, Fernando De La Torre
arXiv preprint, 21 Jun 2024
[arXiv]
SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting
Sara Sabour, Lily Goli, George Kopanas, Mark Matthews, Dmitry Lagun, Leonidas Guibas, Alec Jacobson, David J. Fleet, Andrea Tagliasacchi
arXiv preprint, 28 Jun 2024
[arXiv]
Textured-GS: Gaussian Splatting with Spatially Defined Color and Opacity
Zhentao Huang, Minglun Gong
arXiv preprint, 13 Jul 2024
[arXiv]
Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections
Congrong Xu, Justin Kerr, Angjoo Kanazawa
arXiv preprint, 17 Jul 2024
[arXiv] [Code]
MVG-Splatting: Multi-View Guided Gaussian Splatting with Adaptive Quantile-Based Geometric Consistency Densification
Zhuoxiao Li, Shanliang Yao, Yijie Chu, Angel F. Garcia-Fernandez, Yong Yue, Eng Gee Lim, Xiaohui Zhu
arXiv preprint, 16 Jul 2024
[arXiv] [Project]
3iGS: Factorised Tensorial Illumination for 3D Gaussian Splatting
Zhe Jun Tang, Tat-Jen Cham
ECCV 2024, 7 Aug 2024
[arXiv]
Mipmap-GS: Let Gaussians Deform with Scale-specific Mipmap for Anti-aliasing Rendering
Jiameng Li, Yue Shi, Jiezhang Cao, Bingbing Ni, Wenjun Zhang, Kai Zhang, Luc Van Gool
arXiv preprint, 12 Aug 2024
[arXiv]
FLoD: Integrating Flexible Level of Detail into 3D Gaussian Splatting for Customizable Rendering
Yunji Seo, Young Sun Choi, Hyun Seung Son, Youngjung Uh
arXiv preprint, 23 Aug 2024
[arXiv] [Project] [Code]
Robust 3D Gaussian Splatting for Novel View Synthesis in Presence of Distractors
Paul Ungermann, Armin Ettenhofer, Matthias Nießner, Barbara Roessle
GCPR 2024, 21 Aug 2024
[arXiv] [Project] [Video] [Code]
Implicit Gaussian Splatting with Efficient Multi-Level Tri-Plane Representation
Minye Wu, Tinne Tuytelaars
arXiv preprint, 19 Aug 2024
[arXiv]
Correspondence-Guided SfM-Free 3D Gaussian Splatting for NVS
Wei Sun, Xiaosong Zhang, Fang Wan, Yanzhao Zhou, Yuan Li, Qixiang Ye, Jianbin Jiao
arXiv preprint, 16 Aug 2024
[arXiv]
Sources of Uncertainty in 3D Scene Reconstruction
Marcus Klasson, Riccardo Mereu, Juho Kannala, Arno Solin
ECCV 2024, 10 Sep 2024
[arXiv] [Project] [Code]
Spectral-GS: Taming 3D Gaussian Splatting with Spectral Entropy
Letian Huang, Jie Guo, Jialin Dan, Ruoyu Fu, Shujie Wang, Yuanqi Li, Yanwen Guo
arXiv preprint, 19 Sep 2024
[arXiv]
GStex: Per-Primitive Texturing of 2D Gaussian Splatting for Decoupled Appearance and Geometry Modeling
Victor Rong, Jingxiang Chen, Sherwin Bahmani, Kiriakos N. Kutulakos, David B. Lindell
arXiv preprint, 19 Sep 2024
[arXiv] [Project]
Frequency-based View Selection in Gaussian Splatting Reconstruction
Monica M.Q. Li, Pierre-Yves Lajoie, Giovanni Beltrame
arXiv preprint, 24 Sep 2024
[arXiv]
MVGS: Multi-view-regulated Gaussian Splatting for Novel View Synthesis
Xiaobiao Du, Yida Wang, Xin Yu
arXiv preprint, 2 Oct 2024
[arXiv] [Project] [Code]
6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric Rendering
Zhongpai Gao, Benjamin Planche, Meng Zheng, Anwesa Choudhuri, Terrence Chen, Ziyan Wu
arXiv preprint, 7 Oct 2024
[arXiv] [Project]
PH-Dropout: Practical Epistemic Uncertainty Quantification for View Synthesis
Chuanhao Sun, Thanos Triantafyllou, Anthos Makris, Maja Drmač, Kai Xu, Luo Mai, Mahesh K. Marina
arXiv preprint, 7 Oct 2024
[arXiv]
Variational Bayes Gaussian Splatting
Toon Van de Maele, Ozan Catal, Alexander Tschantz, Christopher L. Buckley, Tim Verbelen
arXiv preprint, 4 Oct 2024
[arXiv]
VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points
Linus Franke, Laura Fink, Marc Stamminger
arXiv preprint, 23 Oct 2024
[arXiv] [Project]
ODGS: 3D Scene Reconstruction from Omnidirectional Images with 3D Gaussian Splattings
Suyoung Lee, Jaeyoung Chung, Jaeyoo Huh, Kyoung Mu Lee
arXiv preprint, 28 Oct 2024
[arXiv] [Code]
Projecting Gaussian Ellipsoids While Avoiding Affine Projection Approximation
Han Qi, Tao Cai, Xiyue Han
arXiv preprint, 12 Nov 2024
[arXiv]
SplatFormer: Point Transformer for Robust 3D Gaussian Splatting
Yutong Chen, Marko Mihajlovic, Xiyi Chen, Yiming Wang, Sergey Prokudin, Siyu Tang
arXiv preprint, 10 Nov 2024
[arXiv] [Project] [Code]
BillBoard Splatting (BBSplat): Learnable Textured Primitives for Novel View Synthesis
David Svitov, Pietro Morerio, Lourdes Agapito, Alessio Del Bue
arXiv preprint, 13 Nov 2024
[arXiv] [Project] [Video] [Code]
Mini-Splatting2: Building 360 Scenes within Minutes via Aggressive Gaussian Densification
Guangchi Fang, Bing Wang
19 Nov 2024
[arXiv]
Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels
Haodong Chen, Runnan Chen, Qiang Qu, Zhaoqing Wang, Tongliang Liu, Xiaoming Chen, Yuk Ying Chung
19 Nov 2024
[arXiv] [Project]
Textured Gaussians for Enhanced 3D Scene Appearance Modeling
Brian Chao, Hung-Yu Tseng, Lorenzo Porzi, Chen Gao, Tuotuo Li, Qinbo Li, Ayush Saraf, Jia-Bin Huang, Johannes Kopf, Gordon Wetzstein, Changil Kim
27 Nov 2024
[arXiv] [Project]
3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes
Jan Held, Renaud Vandeghen, Abdullah Hamdi, Adrien Deliege, Anthony Cioppa, Silvio Giancola, Andrea Vedaldi, Bernard Ghanem, Marc Van Droogenbroeck
22 Nov 2024
[arXiv] [Project] [Video] [Code]
Deformable Radial Kernel Splatting
Yi-Hua Huang, Ming-Xian Lin, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi
16 Dec 2024
[arXiv]
Pushing Rendering Boundaries: Hard Gaussian Splatting
Qingshan Xu, Jiequan Cui, Xuanyu Yi, Yuxuan Wang, Yuan Zhou, Yew-Soon Ong, Hanwang Zhang
6 Dec 2024
[arXiv]
ResGS: Residual Densification of 3D Gaussian for Efficient Detail Recovery
Yanzhe Lyu, Kai Cheng, Xin Kang, Xuejin Chen
10 Dec 2024
[arXiv]
GS-ProCams: Gaussian Splatting-based Projector-Camera Systems
Qingyue Deng, Jijiang Li, Haibin Ling, Bingyao Huang
16 Dec 2024
[arXiv]
GeoTexDensifier: Geometry-Texture-Aware Densification for High-Quality Photorealistic 3D Gaussian Splatting
Hanqing Jiang, Xiaojun Xiang, Han Sun, Hongjie Li, Liyang Zhou, Xiaoyu Zhang, Guofeng Zhang
22 Dec 2024
[arXiv]
Topology-Aware 3D Gaussian Splatting: Leveraging Persistent Homology for Optimized Structural Integrity
Tianqi Shen, Shaohua Liu, Jiaqi Feng, Ziye Ma, Ning An
21 Dec 2024
[arXiv]
🔥Spectrally Pruned Gaussian Fields with Neural Compensation
Runyi Yang, Zhenxin Zhu, Zhou Jiang, Baijun Ye, Xiaoxue Chen, Yifei Zhang, Yuantao Chen, Jian Zhao, Hao Zhao
arXiv preprint, 1 May 2024
Abstract
Recently, 3D Gaussian Splatting, as a novel 3D representation, has garnered attention for its fast rendering speed and high rendering quality. However, this comes with high memory consumption, e.g., a well-trained Gaussian field may utilize three million Gaussian primitives and over 700 MB of memory. We credit this high memory footprint to the lack of consideration for the relationship between primitives. In this paper, we propose a memory-efficient Gaussian field named SUNDAE with spectral pruning and neural compensation. On one hand, we construct a graph on the set of Gaussian primitives to model their relationship and design a spectral down-sampling module to prune out primitives while preserving desired signals. On the other hand, to compensate for the quality loss of pruning Gaussians, we exploit a lightweight neural network head to mix splatted features, which effectively compensates for quality losses while capturing the relationship between primitives in its weights. We demonstrate the performance of SUNDAE with extensive results. For example, SUNDAE can achieve 26.80 PSNR at 145 FPS using 104 MB memory while the vanilla Gaussian splatting algorithm achieves 25.60 PSNR at 160 FPS using 523 MB memory, on the Mip-NeRF360 dataset. Codes are publicly available at this https URL.
PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting
Alex Hanson, Allen Tu, Vasu Singla, Mayuka Jayawardhana, Matthias Zwicker, Tom Goldstein
arXiv preprint, 14 Jun 2024
[arXiv]
Don't Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media
Jorge Condor, Sebastien Speierer, Lukas Bode, Aljaz Bozic, Simon Green, Piotr Didyk, Adrian Jarabo
arXiv preprint, 24 May 2024
[arXiv]
Unified Gaussian Primitives for Scene Representation and Rendering
Yang Zhou, Songyin Wu, Ling-Qi Yan
arXiv preprint, 14 Jun 2024
[arXiv]
3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes
Nicolas Moenne-Loccoz, Ashkan Mirzaei, Or Perel, Riccardo de Lutio, Janick Martinez Esturo, Gavriel State, Sanja Fidler, Nicholas Sharp, Zan Gojcic
arXiv preprint, 9 Jul 2024
[arXiv]
RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis
Hugo Blanc, Jean-Emmanuel Deschaud, Alexis Paljic
arXiv preprint, 6 Aug 2024
[arXiv] [Project]
🔥EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jon Barron, Yinda Zhang
arXiv preprint, 2 Oct 2024
Abstract
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering. Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive based representation allows for exact volume rendering, rather than alpha compositing 3D Gaussian billboards. As such, unlike 3DGS our formulation does not suffer from popping artifacts and view dependent density, but still achieves frame rates of ∼30 FPS at 720p on an NVIDIA RTX4090. Since our approach is built upon ray tracing it enables effects such as defocus blur and camera distortion (e.g. such as from fisheye cameras), which are difficult to achieve by rasterization. We show that our method is more accurate with fewer blending issues than 3DGS and follow-up work on view-consistent rendering, especially on the challenging large-scale scenes from the Zip-NeRF dataset where it achieves sharpest results among real-time techniques.
EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS
Sharath Girish, Kamal Gupta, Abhinav Shrivastava
arXiv preprint, 7 Dec 2023
[arXiv] [Project] [Code]
🔥StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering
Lukas Radl, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, Markus Steinberger
SIGGRAPH 2024, 1 Feb 2024
Abstract
Gaussian Splatting has emerged as a prominent model for constructing 3D representations from images across diverse domains. However, the efficiency of the 3D Gaussian Splatting rendering pipeline relies on several simplifications. Notably, reducing Gaussians to 2D splats with a single view-space depth introduces popping and blending artifacts during view rotation. Addressing this issue requires accurate per-pixel depth computation, yet a full per-pixel sort proves excessively costly compared to a global sort operation. In this paper, we present a novel hierarchical rasterization approach that systematically resorts and culls splats with minimal processing overhead. Our software rasterizer effectively eliminates popping artifacts and view inconsistencies, as demonstrated through both quantitative and qualitative measurements. Simultaneously, our method mitigates the potential for cheating view-dependent effects with popping, ensuring a more authentic representation. Despite the elimination of cheating, our approach achieves comparable quantitative results for test images, while increasing the consistency for novel view synthesis in motion. Due to its design, our hierarchical approach is only 4% slower on average than the original Gaussian Splatting. Notably, enforcing consistency enables a reduction in the number of Gaussians by approximately half with nearly identical quality and view-consistency. Consequently, rendering performance is nearly doubled, making our approach 1.6x faster than the original Gaussian Splatting, with a 50% reduction in memory requirements.
[arXiv] [Project] [Code] [Video]
GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering
Abdullah Hamdi, Luke Melas-Kyriazi, Guocheng Qian, Jinjie Mai, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, Andrea Vedaldi
CVPR 2024, 15 Feb 2024
[arXiv] [Project] [Code] [Video]
OmniGS: Omnidirectional Gaussian Splatting for Fast Radiance Field Reconstruction using Omnidirectional Images
Longwei Li, Huajian Huang, Sai-Kit Yeung, Hui Cheng
arXiv preprint, 4 Apr 2024
[arXiv]
Hash3D: Training-free Acceleration for 3D Generation
Xingyi Yang, Xinchao Wang
arXiv preprint, 9 Apr 2024
[arXiv] [Project] [Code]
I3DGS: Improve 3D Gaussian Splatting from Multiple Dimensions
Jinwei Lin
arXiv preprint, 10 May 2024
[arXiv]
RTGS: Enabling Real-Time Gaussian Splatting on Mobile Devices Using Efficiency-Guided Pruning and Foveated Rendering
Weikai Lin, Yu Feng, Yuhao Zhu
arXiv preprint, 29 Jun 2024
[arXiv] [Code]
3DGS-LM: Faster Gaussian-Splatting Optimization with Levenberg-Marquardt
Lukas Höllein, Aljaž Božič, Michael Zollhöfer, Matthias Nießner
arXiv preprint, 19 Sep 2024
[arXiv] [Project] [Video] [Code]
Low Latency Point Cloud Rendering with Learned Splatting
Yueyu Hu, Ran Gong, Qi Sun, Yao Wang
CVPR 2024 Workshop on AIS, 24 Sep 2024
[arXiv] [Code]
Sort-free Gaussian Splatting via Weighted Sum Rendering
Qiqi Hou, Randall Rauwendaal, Zifeng Li, Hoang Le, Farzad Farhadzadeh, Fatih Porikli, Alexei Bourd, Amir Said
arXiv preprint, 24 Oct 2024
[arXiv]
Speedy-Splat: Fast 3D Gaussian Splatting with Sparse Pixels and Sparse Primitives
Alex Hanson, Allen Tu, Geng Lin, Vasu Singla, Matthias Zwicker, Tom Goldstein
30 Nov 2024
[arXiv]
Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering
Cheng Sun, Jaesung Choe, Charles Loop, Wei-Chiu Ma, Yu-Chiang Frank Wang
5 Dec 2024
[arXiv]
Volumetrically Consistent 3D Gaussian Rasterization
Chinmay Talegaonkar, Yash Belhe, Ravi Ramamoorthi, Nicholas Antipa
4 Dec 2024
[arXiv]
Faster and Better 3D Splatting via Group Training
Chengbo Wang, Guozheng Ma, Yifei Xue, Yizhen Lao
10 Dec 2024
[arXiv]
Turbo-GS: Accelerating 3D Gaussian Fitting for High-Quality Radiance Fields
Tao Lu, Ankit Dhiman, R Srinath, Emre Arslan, Angela Xing, Yuanbo Xiangli, R Venkatesh Babu, Srinath Sridhar
18 Dec 2024
[arXiv]
Balanced 3DGS: Gaussian-wise Parallelism Rendering with Fine-Grained Tiling
Hao Gui, Lin Hu, Rui Chen, Mingxiao Huang, Yuxin Yin, Jin Yang, Yong Wu
23 Dec 2024
[arXiv]
SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering
Antoine Guédon, Vincent Lepetit
arXiv preprint, 21 Nov 2023
[arXiv] [Project]
NeuSG: Neural Implicit Surface Reconstruction with 3D Gaussian Splatting Guidance
Hanlin Chen, Chen Li, Gim Hee Lee
arXiv preprint, 1 Dec, 2023
[arXiv]
AtomGS: Atomizing Gaussian Splatting for High-Fidelity Radiance Field
Rong Liu, Rui Xu, Yue Hu, Meida Chen, Andrew Feng
BMVC 2024, 20 May 2024
[arXiv] [Project] [Code] [Video]
🔥2D Gaussian Splatting for Geometrically Accurate Radiance Fields
Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, Shenghua Gao
SIGGRAPH 2024, 26 Mar 2024
Abstract
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking. However, 3DGS fails to accurately represent surfaces due to the multi-view inconsistent nature of 3D Gaussians. We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images. Our key idea is to collapse the 3D volume into a set of 2D oriented planar Gaussian disks. Unlike 3D Gaussians, 2D Gaussians provide view-consistent geometry while modeling surfaces intrinsically. To accurately recover thin surfaces and achieve stable optimization, we introduce a perspective-correct 2D splatting process utilizing ray-splat intersection and rasterization. Additionally, we incorporate depth distortion and normal consistency terms to further enhance the quality of the reconstructions. We demonstrate that our differentiable renderer allows for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
[arXiv] [Project] [Code] [Video]
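A simplified sketch of the ray-splat intersection at the heart of the 2DGS entry above: each primitive is a planar Gaussian disk with center p, tangent axes t_u, t_v and scales s_u, s_v; intersecting the ray with the disk's plane and evaluating the Gaussian in the local (u, v) frame gives a perspective-correct weight. The paper derives this via a homography; this plane-intersection form and all names are our simplification.

```python
import numpy as np

def ray_splat_weight(o, d, p, t_u, t_v, s_u, s_v):
    """Intersect the ray o + t*d with the plane of a 2D Gaussian disk
    (center p, orthonormal tangent axes t_u/t_v, scales s_u/s_v) and
    evaluate the Gaussian in the disk's local (u, v) coordinates."""
    n = np.cross(t_u, t_v)                 # disk normal
    denom = float(np.dot(d, n))
    if abs(denom) < 1e-8:                  # ray (nearly) parallel to the disk
        return 0.0, np.inf
    t = float(np.dot(p - o, n)) / denom    # ray parameter at the plane
    x = o + t * d
    u = np.dot(x - p, t_u) / s_u           # scale-normalized local coordinates
    v = np.dot(x - p, t_v) / s_v
    return float(np.exp(-0.5 * (u * u + v * v))), t
```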
GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction
Mulin Yu, Tao Lu, Linning Xu, Lihan Jiang, Yuanbo Xiangli, Bo Dai
arXiv preprint, 25 Mar 2024
[arXiv] [Project] [Code]
Modeling uncertainty for Gaussian Splatting
Luca Savant, Diego Valsesia, Enrico Magli
arXiv preprint, 27 Mar 2024
[arXiv]
Surface Reconstruction from Gaussian Splatting via Novel Stereo Views
Yaniv Wolf, Amit Bracha, Ron Kimmel
arXiv preprint, 2 Apr 2024
[arXiv] [Project]
Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes
Zehao Yu, Torsten Sattler, Andreas Geiger
arXiv preprint, 16 Apr 2024
[arXiv] [Project] [Code]
Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos
Isabella Liu, Hao Su, Xiaolong Wang
arXiv preprint, 18 Apr 2024
[arXiv] [Project]
Direct Learning of Mesh and Appearance via 3D Gaussian Splatting
Ancheng Lin, Jun Li
arXiv preprint, 11 May 2024
[arXiv]
TetSphere Splatting: Representing High-Quality Geometry with Lagrangian Volumetric Meshes
Minghao Guo, Bohan Wang, Kaiming He, Wojciech Matusik
arXiv preprint, 30 May 2024
[arXiv]
Tetrahedron Splatting for 3D Generation
Chun Gu, Zeyu Yang, Zijie Pan, Xiatian Zhu, Li Zhang
arXiv preprint, 3 Jun 2024
[arXiv] [Code]
RaDe-GS: Rasterizing Depth in Gaussian Splatting
Baowen Zhang, Chuan Fang, Rakesh Shrestha, Yixun Liang, Xiaoxiao Long, Ping Tan
arXiv preprint, 3 Jun 2024
[arXiv]
Trim 3D Gaussian Splatting for Accurate Geometry Representation
Lue Fan, Yuxue Yang, Minxing Li, Hongsheng Li, Zhaoxiang Zhang
arXiv preprint, 11 Jun 2024
[arXiv] [Project] [Code]
🔥PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction
Danpeng Chen, Hai Li, Weicai Ye, Yifan Wang, Weijian Xie, Shangjin Zhai, Nan Wang, Haomin Liu, Hujun Bao, Guofeng Zhang
arXiv preprint, 10 Jun 2024
Abstract
Recently, 3D Gaussian Splatting (3DGS) has attracted widespread attention due to its high-quality rendering, and ultra-fast training and rendering speed. However, due to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency simply by relying on image reconstruction loss. Although many studies on surface reconstruction based on 3DGS have emerged recently, the quality of their meshes is generally unsatisfactory. To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction while ensuring high-quality rendering. Specifically, we first introduce an unbiased depth rendering method, which directly renders the distance from the camera origin to the Gaussian plane and the corresponding normal map based on the Gaussian distribution of the point cloud, and divides the two to obtain the unbiased depth. We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy. We also propose a camera exposure compensation model to cope with scenes with large illumination variations. Experiments on indoor and outdoor scenes show that our method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods.
VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction
Hanlin Chen, Fangyin Wei, Chen Li, Tianxin Huang, Yunsong Wang, Gim Hee Lee
arXiv preprint, 9 Jun 2024
[arXiv]
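The unbiased depth described in the PGSR abstract above (rendering the camera-to-plane distance and the normal separately, then dividing) can be written for a pixel with unit ray direction r roughly as:

```latex
d(\mathbf{r}) \;=\; \frac{D(\mathbf{r})}{\mathbf{n}(\mathbf{r})^{\top}\mathbf{r}}
```

where D is the alpha-blended distance from the camera origin to each Gaussian's plane and n the blended plane normal; this is our notation, following the abstract rather than the paper's exact equations.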
Projecting Radiance Fields to Mesh Surfaces
Adrian Xuan Wei Lim, Lynnette Hui Xian Ng, Nicholas Kyger, Tomo Michigami, Faraz Baghernezhad
SIGGRAPH Poster 2024, 17 Jun 2024
[arXiv]
GS-Octree: Octree-based 3D Gaussian Splatting for Robust Object-level 3D Reconstruction Under Strong Lighting
Jiaze Li, Zhengyu Wen, Luo Zhang, Jiangbei Hu, Fei Hou, Zhebin Zhang, Ying He
arXiv preprint, 26 Jun 2024
[arXiv]
2DGH: 2D Gaussian-Hermite Splatting for High-quality Rendering and Better Geometry Reconstruction
Ruihan Yu, Tianyu Huang, Jingwang Ling, Feng Xu
arXiv preprint, 30 Aug 2024
[arXiv]
Spurfies: Sparse Surface Reconstruction using Local Geometry Priors
Kevin Raj, Christopher Wewer, Raza Yunus, Eddy Ilg, Jan Eric Lenssen
arXiv preprint, 29 Aug 2024
[arXiv] [Project]
Spiking GS: Towards High-Accuracy and Low-Cost Surface Reconstruction via Spiking Neuron-based Gaussian Splatting
Weixing Zhang, Zongrui Li, De Ma, Huajin Tang, Xudong Jiang, Qian Zheng, Gang Pan
arXiv preprint, 9 Oct 2024
[arXiv] [Code]
Normal-GS: 3D Gaussian Splatting with Normal-Involved Rendering
Meng Wei, Qianyi Wu, Jianmin Zheng, Hamid Rezatofighi, Jianfei Cai
NeurIPS 2024, 27 Oct 2024
[arXiv]
🔥GVKF: Gaussian Voxel Kernel Functions for Highly Efficient Surface Reconstruction in Open Scenes
Gaochao Song, Chong Cheng, Hao Wang
NeurIPS 2024, 4 Nov 2024
Abstract
In this paper we present a novel method for efficient and effective 3D surface reconstruction in open scenes. Existing Neural Radiance Fields (NeRF) based works typically require extensive training and rendering time due to the adopted implicit representations. In contrast, 3D Gaussian splatting (3DGS) uses an explicit and discrete representation, hence the reconstructed surface is built by the huge number of Gaussian primitives, which leads to excessive memory consumption and rough surface details in sparse Gaussian areas. To address these issues, we propose Gaussian Voxel Kernel Functions (GVKF), which establish a continuous scene representation based on discrete 3DGS through kernel regression. The GVKF integrates fast 3DGS rasterization and highly effective scene implicit representations, achieving high-fidelity open scene surface reconstruction. Experiments on challenging scene datasets demonstrate the efficiency and effectiveness of our proposed GVKF, featuring high reconstruction quality, real-time rendering speed, and significant savings in storage and training memory consumption.
[arXiv]
DyGASR: Dynamic Generalized Exponential Splatting with Surface Alignment for Accelerated 3D Mesh Reconstruction
Shengchao Zhao, Yundong Li
arXiv preprint, 14 Nov 2024
[arXiv]
Quadratic Gaussian Splatting for Efficient and Detailed Surface Reconstruction
Ziyu Zhang, Binbin Huang, Hanqing Jiang, Liyang Zhou, Xiaojun Xiang, Shunhan Shen
25 Nov 2024
[arXiv]
Geometry Field Splatting with Gaussian Surfels
Kaiwen Jiang, Venkataram Sivaram, Cheng Peng, Ravi Ramamoorthi
26 Nov 2024
[arXiv]
G2SDF: Surface Reconstruction from Explicit Gaussians with Implicit SDFs
Kunyi Li, Michael Niemeyer, Zeyu Chen, Nassir Navab, Federico Tombari
25 Nov 2024
[arXiv]
GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision
Baixin Xu, Jiangbei Hu, Jiaze Li, Ying He
24 Nov 2024
[arXiv] [Code]
SplatSDF: Boosting Neural Implicit SDF via Gaussian Splatting Fusion
Runfa Blark Li, Keito Suzuki, Bang Du, Ki Myung Brian Le, Nikolay Atanasov, Truong Nguyen
23 Nov 2024
[arXiv]
HDGS: Textured 2D Gaussian Splatting for Enhanced Scene Rendering
Yunzhou Song, Heguang Lin, Jiahui Lei, Lingjie Liu, Kostas Daniilidis
2 Dec 2024
[arXiv] [Project] [Code](https://github.com/TimSong412/HDGS)
Ref-GS: Directional Factorization for 2D Gaussian Splatting
Youjia Zhang, Anpei Chen, Yumin Wan, Zikai Song, Junqing Yu, Yawei Luo, Wei Yang
1 Dec 2024
[arXiv] [Project]
GausSurf: Geometry-Guided 3D Gaussian Splatting for Surface Reconstruction
Jiepeng Wang, Yuan Liu, Peng Wang, Cheng Lin, Junhui Hou, Xin Li, Taku Komura, Wenping Wang
29 Nov 2024
[arXiv] [Project] [Code]
Integrating Meshes and 3D Gaussians for Indoor Scene Reconstruction with SAM Mask Guidance
Jiyeop Kim, Jongwoo Lim
arXiv preprint, 23 Jul 2024
[arXiv]
Enhancement of 3D Gaussian Splatting using Raw Mesh for Photorealistic Recreation of Architectures
Ruizhe Wang, Chunliang Hua, Tomakayev Shingys, Mengyuan Niu, Qingxin Yang, Lizhong Gao, Yi Zheng, Junyan Yang, Qiao Wang
arXiv preprint, 22 Jul 2024
[arXiv]
Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis
Jonathon Luiten, Georgios Kopanas, Bastian Leibe, Deva Ramanan
arXiv preprint, 18 Aug 2023
[arXiv] [Project] [Github]
🔥Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction
Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, Xiaogang Jin
arXiv preprint, 22 Sep 2023
Abstract
Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic neural rendering methods rely heavily on these implicit representations, which frequently struggle to capture the intricate details of objects in the scene. Furthermore, implicit methods have difficulty achieving real-time rendering in general dynamic scenes, limiting their use in a variety of tasks. To address the issues, we propose a deformable 3D Gaussians Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space with a deformation field to model monocular dynamic scenes. We also introduce an annealing smoothing training mechanism with no extra overhead, which can mitigate the impact of inaccurate poses on the smoothness of time interpolation tasks in real-world datasets. Through a differential Gaussian rasterizer, the deformable 3D Gaussians not only achieve higher rendering quality but also real-time rendering speed. Experiments show that our method outperforms existing methods significantly in terms of both rendering quality and speed, making it well-suited for tasks such as novel-view synthesis, time interpolation, and real-time rendering.
[arXiv]
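A minimal sketch of the canonical-space-plus-deformation-field design in the Deformable 3D Gaussians entry above: a shared MLP maps each canonical Gaussian's position and a timestamp to offsets for its position, rotation, and scale. Layer widths and the (omitted) positional encodings are our assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeformField(nn.Module):
    """Map (canonical position, time) to per-Gaussian offsets (dx, dq, ds)."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),  # position, quaternion, scale offsets
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        # xyz: (N, 3) canonical centers; t: (N, 1) timestamps. Real
        # implementations positionally encode xyz and t before the MLP.
        dx, dq, ds = self.mlp(torch.cat([xyz, t], dim=-1)).split([3, 4, 3], dim=-1)
        return dx, dq, ds
```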
4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, Xinggang Wang
arXiv preprint, 12 Oct 2023
[arXiv] [Project] [Github]
Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting
Zeyu Yang, Hongye Yang, Zijie Pan, Xiatian Zhu, Li Zhang
arXiv preprint, 16 Oct 2023
[arXiv]
Neural Parametric Gaussians for Monocular Non-Rigid Object Reconstruction
Devikalyan Das, Christopher Wewer, Raza Yunus, Eddy Ilg, Jan Eric Lenssen
arXiv preprint, 2 Dec 2023
[arXiv]
Gaussian-Flow: 4D Reconstruction with Dynamic 3D Gaussian Particle
Youtian Lin, Zuozhuo Dai, Siyu Zhu, Yao Yao
arXiv preprint, 6 Dec 2023
[arXiv]
CoGS: Controllable Gaussian Splatting
Heng Yu, Joel Julin, Zoltán Á. Milacski, Koichiro Niinuma, László A. Jeni
CVPR 2024, 9 Dec 2023
[arXiv]
GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis
Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
arXiv preprint, 18 Dec 2023
[arXiv] [Project]
🔥SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, Xiaojuan Qi
CVPR 2024, 4 Dec 2023
Abstract
Novel view synthesis for dynamic scenes is still a challenging problem in computer vision and graphics. Recently, Gaussian splatting has emerged as a robust technique to represent static scenes and enable high-quality and real-time novel view synthesis. Building upon this technique, we propose a new representation that explicitly decomposes the motion and appearance of dynamic scenes into sparse control points and dense Gaussians, respectively. Our key idea is to use sparse control points, significantly fewer in number than the Gaussians, to learn compact 6 DoF transformation bases, which can be locally interpolated through learned interpolation weights to yield the motion field of 3D Gaussians. We employ a deformation MLP to predict time-varying 6 DoF transformations for each control point, which reduces learning complexities, enhances learning abilities, and facilitates obtaining temporal and spatial coherent motion patterns. Then, we jointly learn the 3D Gaussians, the canonical space locations of control points, and the deformation MLP to reconstruct the appearance, geometry, and dynamics of 3D scenes. During learning, the location and number of control points are adaptively adjusted to accommodate varying motion complexities in different regions, and an ARAP loss following the principle of as rigid as possible is developed to enforce spatial continuity and local rigidity of learned motions. Finally, thanks to the explicit sparse motion representation and its decomposition from appearance, our method can enable user-controlled motion editing while retaining high-fidelity appearances. Extensive experiments demonstrate that our approach outperforms existing approaches on novel view synthesis with a high rendering speed and enables novel appearance-preserved motion editing applications. Project page: this https URL
[arXiv] [Project] [Code] [Video]
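Schematically, the sparse-control idea in the SC-GS entry above blends a small set of time-varying 6-DoF transforms at the control points, in the spirit of linear blend skinning (our notation, not the paper's):

```latex
\mathbf{x}_i(t) \;=\; \sum_{k \in \mathcal{K}_i} w_{ik}\,\bigl(\mathbf{R}_k(t)\,\mathbf{x}_i + \mathbf{T}_k(t)\bigr),
\qquad \sum_{k \in \mathcal{K}_i} w_{ik} = 1
```

where K_i are the nearest control points of Gaussian i and w_ik the learned interpolation weights.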
Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis
Zhan Li, Zhang Chen, Zhong Li, Yi Xu
CVPR 2024, 28 Dec 2023
[arXiv] [Project] [Code] [Video]
4D Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes
Yuanxing Duan, Fangyin Wei, Qiyu Dai, Yuhang He, Wenzheng Chen, Baoquan Chen
arXiv preprint, 5 Feb 2024
[arXiv] [Code]
Mesh-based Gaussian Splatting for Real-time Large-scale Deformation
Lin Gao, Jie Yang, Bo-Tao Zhang, Jia-Mu Sun, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai
arXiv preprint, 7 Feb 2024
[arXiv]
GaussianFlow: Splatting Gaussian Dynamics for 4D Content Creation
Quankai Gao, Qiangeng Xu, Zhe Cao, Ben Mildenhall, Wenchao Ma, Le Chen, Danhang Tang, Ulrich Neumann
arXiv preprint, 19 Mar 2024
[arXiv] [Project] [Code] [Video]
Per-Gaussian Embedding-Based Deformation for Deformable 3D Gaussian Splatting
Jeongmin Bae, Seoha Kim, Youngsik Yun, Hahyun Lee, Gun Bang, Youngjung Uh
arXiv preprint, 4 Apr 2024
[arXiv] [Project] [Code]
3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis
Zhicheng Lu, Xiang Guo, Le Hui, Tianrui Chen, Min Yang, Xiao Tang, Feng Zhu, Yuchao Dai
CVPR 2024, 9 Apr 2024
[arXiv] [Project]
Gaussian Time Machine: A Real-Time Rendering Methodology for Time-Variant Appearances
Licheng Shen, Ho Ngai Chow, Lingyun Wang, Tong Zhang, Mengqiu Wang, Yuxing Han
arXiv preprint, 22 May 2024
[arXiv]
MoSca: Dynamic Gaussian Fusion from Casual Videos via 4D Motion Scaffolds
Jiahui Lei, Yijia Weng, Adam Harley, Leonidas Guibas, Kostas Daniilidis
arXiv preprint, 27 May 2024
[arXiv] [Project] [Video]
GSDeformer: Direct Cage-based Deformation for 3D Gaussian Splatting
Jiajun Huang, Hongchuan Yu
arXiv preprint, 24 May 2024
[arXiv] [Project] [Video]
GFlow: Recovering 4D World from Monocular Video
Shizun Wang, Xingyi Yang, Qiuhong Shen, Zhenxiang Jiang, Xinchao Wang
arXiv preprint, 28 May 2024
[arXiv] [Project]
A Refined 3D Gaussian Representation for High-Quality Dynamic Scene Reconstruction
Bin Zhang, Bi Zeng, Zexin Peng
arXiv preprint, 28 May 2024
[arXiv]
Object-centric Reconstruction and Tracking of Dynamic Unknown Objects using 3D Gaussian Splatting
Kuldeep R Barad, Antoine Richard, Jan Dentler, Miguel Olivares-Mendez, Carol Martinez
IEEE Space Robotics 2024, 30 May 2024
[arXiv]
GaussianPrediction: Dynamic 3D Gaussian Prediction for Motion Extrapolation and Free View Synthesis
Boming Zhao, Yuan Li, Ziyu Sun, Lin Zeng, Yujun Shen, Rui Ma, Yinda Zhang, Hujun Bao, Zhaopeng Cui
SIGGRAPH 2024, 30 May 2024
[arXiv] [Project]
Reconstructing and Simulating Dynamic 3D Objects with Mesh-adsorbed Gaussian Splatting
Shaojie Ma, Yawei Luo, Yi Yang
arXiv preprint, 3 Jun 2024
[arXiv] [Project] [Code]
Self-Calibrating 4D Novel View Synthesis from Monocular Videos Using Gaussian Splatting
Fang Li, Hao Zhang, Narendra Ahuja
arXiv preprint, 3 Jun 2024
[arXiv] [Code]
🔥Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction
Diwen Wan, Ruijie Lu, Gang Zeng
ICML 2024, 6 Jun 2024
Abstract
Rendering novel view images in dynamic scenes is a crucial yet challenging task. Current methods mainly utilize NeRF-based methods to represent the static scene and an additional time-variant MLP to model scene deformations, resulting in relatively low rendering quality as well as slow inference speed. To tackle these challenges, we propose a novel framework named Superpoint Gaussian Splatting (SP-GS). Specifically, our framework first employs explicit 3D Gaussians to reconstruct the scene and then clusters Gaussians with similar properties (e.g., rotation, translation, and location) into superpoints. Empowered by these superpoints, our method manages to extend 3D Gaussian splatting to dynamic scenes with only a slight increase in computational expense. Apart from achieving state-of-the-art visual quality and real-time rendering under high resolutions, the superpoint representation provides a stronger manipulation capability. Extensive experiments demonstrate the practicality and effectiveness of our approach on both synthetic and real-world datasets. Please see our project page at this https URL.
MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos
Qingming Liu, Yuan Liu, Jiepeng Wang, Xianqiang Lv, Peng Wang, Wenping Wang, Junhui Hou
arXiv preprint, 1 Jun 2024
[arXiv]
DGD: Dynamic 3D Gaussians Distillation
Isaac Labe, Noam Issachar, Itai Lang, Sagie Benaim
arXiv preprint, 29 May 2024
[arXiv] [Project] [Code]
Modeling Ambient Scene Dynamics for Free-view Synthesis
Meng-Li Shih, Jia-Bin Huang, Changil Kim, Rajvi Shah, Johannes Kopf, Chen Gao
SIGGRAPH 2024, 13 Jun 2024
[arXiv] [Project]
Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos
Colton Stearns, Adam Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, Leonidas Guibas
arXiv preprint, 26 Jun 2024
[arXiv]
Gaussian Splatting LK
Liuyue Xie, Joel Julin, Koichiro Niinuma, Laszlo A. Jeni
arXiv preprint, 16 Jul 2024
[arXiv]
S4D: Streaming 4D Real-World Reconstruction with Gaussians and 3D Control Points
Bing He, Yunuo Chen, Guo Lu, Li Song, Wenjun Zhang
arXiv preprint, 23 Aug 2024
[arXiv]
SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction
Marko Mihajlovic, Sergey Prokudin, Siyu Tang, Robert Maier, Federica Bogo, Tony Tung, Edmond Boyer
ECCV 2024, 17 Sep 2024
[arXiv]
MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting
Ruijie Zhu, Yanzhe Liang, Hanzhi Chang, Jiacheng Deng, Jiahao Lu, Wenfei Yang, Tianzhu Zhang, Yongdong Zhang
NeurIPS 2024, 10 Oct 2024
[arXiv] [Project]
DN-4DGS: Denoised Deformable Network with Temporal-Spatial Aggregation for Dynamic Scene Rendering
Jiahao Lu, Jiacheng Deng, Ruijie Zhu, Yanzhe Liang, Wenfei Yang, Tianzhu Zhang, Xu Zhou
NeurIPS 2024, 17 Oct 2024
[arXiv]
MEGA: Memory-Efficient 4D Gaussian Splatting for Dynamic Scenes
Xinjie Zhang, Zhening Liu, Yifan Zhang, Xingtong Ge, Dailan He, Tongda Xu, Yan Wang, Zehong Lin, Shuicheng Yan, Jun Zhang
arXiv preprint, 17 Oct 2024
[arXiv]
Fully Explicit Dynamic Gaussian Splatting
Junoh Lee, Chang-Yeon Won, Hyunjun Jung, Inhwan Bae, Hae-Gon Jeon
NeurIPS 2024, 21 Oct 2024
[arXiv]
FreeGaussian: Guidance-free Controllable 3D Gaussian Splats with Flow Derivatives
Qizhi Chen, Delin Qu, Yiwen Tang, Haoming Song, Yiting Zhang, Dong Wang, Bin Zhao, Xuelong Li
arXiv preprint, 29 Oct 2024
[arXiv] [Project] [Code]
Grid4D: 4D Decomposed Hash Encoding for High-fidelity Dynamic Gaussian Splatting
Jiawei Xu, Zexin Fan, Jian Yang, Jin Xie
NeurIPS 2024, 28 Oct 2024
[arXiv]
HiCoM: Hierarchical Coherent Motion for Streamable Dynamic Scene with 3D Gaussian Splatting
Qiankun Gao, Jiarui Meng, Chengxiang Wen, Jie Chen, Jian Zhang
NeurIPS 2024, 12 Nov 2024
[arXiv] [Code]
Adaptive and Temporally Consistent Gaussian Surfels for Multi-view Dynamic Reconstruction
Decai Chen, Brianne Oberson, Ingo Feldmann, Oliver Schreer, Anna Hilsmann, Peter Eisert
arXiv preprint, 10 Nov 2024
[arXiv] [Project]
4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization
Mijeong Kim, Jongwoo Lim, Bohyung Han
NeurIPS 2024, 13 Nov 2024
[arXiv]
Sketch-guided Cage-based 3D Gaussian Splatting Deformation
Tianhao Xie, Noam Aigerman, Eugene Belilovsky, Tiberiu Popa
arXiv preprint, 19 Nov 2024
[arXiv]
TimeFormer: Capturing Temporal Relationships of Deformable 3D Gaussians for Robust Reconstruction
DaDong Jiang, Zhihui Ke, Xiaobo Zhou, Zhi Hou, Xianghui Yang, Wenbo Hu, Tie Qiu, Chunchao Guo
18 Nov 2024
[arXiv] [Project]
4D Scaffold Gaussian Splatting for Memory Efficient Dynamic Scene Reconstruction
Woong Oh Cho, In Cho, Seoha Kim, Jeongmin Bae, Youngjung Uh, Seon Joo Kim
26 Nov 2024
[arXiv]
Event-boosted Deformable 3D Gaussians for Fast Dynamic Scene Reconstruction
Wenhao Xu, Wenming Weng, Yueyi Zhang, Ruikang Xu, Zhiwei Xiong
25 Nov 2024
[arXiv]
RelayGS: Reconstructing Dynamic Scenes with Large-Scale and Complex Motions via Relay Gaussians
Qiankun Gao, Yanmin Wu, Chengxiang Wen, Jiarui Meng, Luyang Tang, Jie Chen, Ronggang Wang, Jian Zhang
3 Dec 2024
[arXiv] [Code]
Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps
Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, James Tompkin, Adam W. Harley
5 Dec 2024
[arXiv] [Project] [Code]
Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction
Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong
4 Dec 2024
[arXiv]
HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting
Jingyu Lin, Jiaqi Gu, Lubin Fan, Bojian Wu, Yujing Lou, Renjie Chen, Ligang Liu, Jieping Ye
5 Dec 2024
[arXiv] [Project] [Code]
Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis
Diwen Wan, Yuxiang Wang, Ruijie Lu, Gang Zeng
NeurIPS 2024, 7 Dec 2024
[arXiv]
4D Gaussian Splatting with Scale-aware Residual Field and Adaptive Optimization for Real-time Rendering of Temporally Complex Dynamic Scenes
Jinbo Yan, Rui Peng, Luyang Tang, Ronggang Wang
9 Dec 2024
[arXiv] [Project]
Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video
Renlong Wu, Zhilu Zhang, Mingyang Chen, Xiaopeng Fan, Zifei Yan, Wangmeng Zuo
9 Dec 2024
[arXiv] [Code]
SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video
Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, Munchurl Kim
13 Dec 2024
[arXiv]
🔥DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization
Jiahe Li, Jiawei Zhang, Xiao Bai, Jin Zheng, Xin Ning, Jun Zhou, Lin Gu
CVPR 2024, 11 Mar 2024
Abstract
Radiance fields have demonstrated impressive performance in synthesizing novel views from sparse input views, yet prevailing methods suffer from high training costs and slow inference speed. This paper introduces DNGaussian, a depth-regularized framework based on 3D Gaussian radiance fields, offering real-time and high-quality few-shot novel view synthesis at low costs. Our motivation stems from the highly efficient representation and surprising quality of the recent 3D Gaussian Splatting, even though it encounters geometry degradation when input views decrease. In Gaussian radiance fields, we find this degradation in scene geometry is primarily linked to the positioning of Gaussian primitives and can be mitigated by a depth constraint. Consequently, we propose a Hard and Soft Depth Regularization to restore accurate scene geometry under coarse monocular depth supervision while maintaining a fine-grained color appearance. To further refine detailed geometry reshaping, we introduce Global-Local Depth Normalization, enhancing the focus on small local depth changes. Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that DNGaussian outperforms state-of-the-art methods, achieving comparable or better results with significantly reduced memory cost, a 25× reduction in training time, and over 3000× faster rendering speed.
[arXiv] [Project] [Code] [Video]
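As a hedged illustration of the local part of Global-Local Depth Normalization (my reading of the abstract, not the authors' implementation), one can normalize each depth map per local patch before comparing, so the loss emphasizes small local depth changes rather than absolute scale:

```python
# Sketch only: patch-wise depth normalization for a few-shot depth loss.
import torch
import torch.nn.functional as F

def local_normalize(depth, patch=8, eps=1e-6):
    """depth: (1, 1, H, W). Subtract the patch mean, divide by the patch std."""
    mean = F.avg_pool2d(depth, patch, stride=patch)
    sq_mean = F.avg_pool2d(depth ** 2, patch, stride=patch)
    std = (sq_mean - mean ** 2).clamp_min(eps).sqrt()
    # Broadcast the patch statistics back to full resolution.
    mean = F.interpolate(mean, size=depth.shape[-2:], mode="nearest")
    std = F.interpolate(std, size=depth.shape[-2:], mode="nearest")
    return (depth - mean) / std

def depth_regularization(rendered_depth, mono_depth, patch=8):
    """L1 between locally normalized rendered and monocular depth maps."""
    return F.l1_loss(local_normalize(rendered_depth, patch),
                     local_normalize(mono_depth, patch))
```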
🔥DN-Splatter: Depth and Normal Priors for Gaussian Splatting and Meshing
Matias Turkulainen, Xuqian Ren, Iaroslav Melekhov, Otto Seiskari, Esa Rahtu, Juho Kannala
arXiv preprint, 26 Mar 2024
Abstract
High-fidelity 3D reconstruction of common indoor scenes is crucial for VR and AR applications. 3D Gaussian splatting, a novel differentiable rendering technique, has achieved state-of-the-art novel view synthesis results with high rendering speeds and relatively low training times. However, its performance on scenes commonly seen in indoor datasets is poor due to the lack of geometric constraints during optimization. In this work, we explore the use of readily accessible geometric cues to enhance Gaussian splatting optimization in challenging, ill-posed, and textureless scenes. We extend 3D Gaussian splatting with depth and normal cues to tackle challenging indoor datasets and showcase techniques for efficient mesh extraction. Specifically, we regularize the optimization procedure with depth information, enforce local smoothness of nearby Gaussians, and use off-the-shelf monocular networks to achieve better alignment with the true scene geometry. We propose an adaptive depth loss based on the gradient of color images, improving depth estimation and novel view synthesis results over various baselines. Our simple yet effective regularization technique enables direct mesh extraction from the Gaussian representation, yielding more physically accurate reconstructions of indoor scenes.
[arXiv]
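A minimal sketch of a color-gradient-adaptive depth loss in this spirit (an assumed form, not the released DN-Splatter code): depth supervision is down-weighted near strong image edges, where depth discontinuities make the supervision least reliable:

```python
# Sketch only: image-gradient-weighted depth loss.
import torch

def adaptive_depth_loss(rendered_depth, sensor_depth, image):
    """rendered_depth, sensor_depth: (H, W); image: (3, H, W) in [0, 1]."""
    gray = image.mean(dim=0)
    # Finite-difference image gradients, zero-padded to keep the (H, W) shape.
    gx = torch.zeros_like(gray); gx[:, :-1] = gray[:, 1:] - gray[:, :-1]
    gy = torch.zeros_like(gray); gy[:-1, :] = gray[1:, :] - gray[:-1, :]
    grad_mag = (gx ** 2 + gy ** 2).sqrt()
    weight = torch.exp(-grad_mag)   # ~1 in flat regions, smaller at edges
    valid = sensor_depth > 0        # ignore missing depth readings
    return (weight * (rendered_depth - sensor_depth).abs())[valid].mean()
```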
HoloGS: Instant Depth-based 3D Gaussian Splatting with Microsoft HoloLens 2
Miriam Jäger, Theodor Kapler, Michael Feßenbecker, Felix Birkelbach, Markus Hillemann, Boris Jutzi
arXiv preprint, 3 May 2024
[arXiv]
Self-Evolving Depth-Supervised 3D Gaussian Splatting from Rendered Stereo Pairs
Sadra Safadoust, Fabio Tosi, Fatma Güney, Matteo Poggi
BMVC 2024, 11 Sep 2024
[arXiv] [Project] [Code]
Depth Estimation Based on 3D Gaussian Splatting Siamese Defocus
Jinchang Zhang, Ningning Xu, Hao Zhang, Guoyu Lu
arXiv preprint, 18 Sep 2024
[arXiv]
Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images
Jaeyoung Chung, Jeongtaek Oh, Kyoung Mu Lee
arXiv preprint, 22 Nov 2023
[arXiv]
FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting
Zehao Zhu, Zhiwen Fan, Yifan Jiang, Zhangyang Wang
arXiv preprint, 1 Dec 2023
[arXiv] [Project]
Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers
Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, Song-Hai Zhang
arXiv preprint, 14 Dec 2023
[arXiv] [Project] [Code]
pixelSplat: 3D Gaussian Splats from Image Pairs for Scalable Generalizable 3D Reconstruction
David Charatan, Sizhe Li, Andrea Tagliasacchi, Vincent Sitzmann
arXiv preprint, 19 Dec 2023
[arXiv] [Project] [Code]
AGG: Amortized Generative 3D Gaussians for Single Image to 3D
Dejia Xu, Ye Yuan, Morteza Mardani, Sifei Liu, Jiaming Song, Zhangyang Wang, Arash Vahdat
arXiv preprint, 8 Jan 2024
[arXiv] [Project]
GaussianObject: Just Taking Four Images to Get A High-Quality 3D Object with Gaussian Splatting
Chen Yang, Sikuang Li, Jiemin Fang, Ruofan Liang, Lingxi Xie, Xiaopeng Zhang, Wei Shen, Qi Tian
arXiv preprint, 15 Feb 2024
[arXiv] [Project]
FDGaussian: Fast Gaussian Splatting from Single Image via Geometric-aware Diffusion Model
Qijun Feng, Zhen Xing, Zuxuan Wu, Yu-Gang Jiang
arXiv preprint, 15 Mar 2024
[arXiv] [Project]
Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction
Qiuhong Shen, Xuanyu Yi, Zike Wu, Pan Zhou, Hanwang Zhang, Shuicheng Yan, Xinchao Wang
arXiv preprint, 27 Mar 2024
[arXiv] [Project]
🔥InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds
Zhiwen Fan, Wenyan Cong, Kairun Wen, Kevin Wang, Jian Zhang, Xinghao Ding, Danfei Xu, Boris Ivanovic, Marco Pavone, Georgios Pavlakos, Zhangyang Wang, Yue Wang
arXiv preprint, 29 Mar 2024
Abstract
While novel view synthesis (NVS) from a sparse set of images has advanced significantly in 3D computer vision, it relies on precise initial estimation of camera parameters using Structure-from-Motion (SfM). For instance, the recently developed Gaussian Splatting depends heavily on the accuracy of SfM-derived points and poses. However, SfM processes are time-consuming and often prove unreliable in sparse-view scenarios, where matched features are scarce, leading to accumulated errors and limited generalization capability across datasets. In this study, we introduce a novel and efficient framework to enhance robust NVS from sparse-view images. Our framework, InstantSplat, integrates multi-view stereo (MVS) predictions with point-based representations to construct 3D Gaussians of large-scale scenes from sparse-view data within seconds, addressing the aforementioned performance and efficiency issues caused by SfM. Specifically, InstantSplat generates densely populated surface points across all training views and determines the initial camera parameters using pixel-alignment. Nonetheless, the MVS points are not globally accurate, and the pixel-wise prediction from all views results in an excessive number of Gaussians, yielding an overparameterized scene representation that compromises both training speed and accuracy. To address this issue, we employ a grid-based, confidence-aware Farthest Point Sampling to strategically position point primitives at representative locations in parallel. Next, we enhance pose accuracy and tune scene parameters through a gradient-based joint optimization framework with self-supervision. By employing this simplified framework, InstantSplat achieves a substantial reduction in training time, from hours to mere seconds, and demonstrates robust performance across various numbers of views in diverse datasets.
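A hedged sketch of confidence-aware farthest point sampling as described above (an assumed behavior, omitting the grid parallelism; names are illustrative, not the InstantSplat source):

```python
# Sketch only: greedily pick points far from those already chosen,
# scaled by a per-point confidence so noisy MVS points are rarely selected.
import numpy as np

def confidence_aware_fps(points, confidence, k):
    """points: (N, 3); confidence: (N,) in [0, 1]; returns k selected indices."""
    selected = [int(np.argmax(confidence))]   # seed from the most confident point
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        # Score = distance to the current selection, weighted by confidence;
        # already-selected points have distance 0 and cannot be re-picked.
        idx = int(np.argmax(dist * confidence))
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(selected)
```

CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians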
Avinash Paliwal, Wei Ye, Jinhui Xiong, Dmytro Kotovenko, Rakesh Ranjan, Vikas Chandra, Nima Khademi Kalantari
arXiv preprint, 28 Mar 2024
[arXiv] [Project]
Guess The Unseen: Dynamic 3D Scene Reconstruction from Partial 2D Glimpses
Inhee Lee, Byungjun Kim, Hanbyul Joo
arXiv preprint, 22 Apr 2024
[arXiv] [Project]
GDGS: Gradient Domain Gaussian Splatting for Sparse Representation of Radiance Fields
Yuanhao Gong
arXiv preprint, 8 May 2024
[arXiv]
CoR-GS: Sparse-View 3D Gaussian Splatting via Co-Regularization
Jiawei Zhang, Jiahe Li, Xiaohan Yu, Lei Huang, Lin Gu, Jin Zheng, Xiao Bai
arXiv preprint, 20 May 2024
[arXiv] [Project] [Video]
Sp2360: Sparse-view 360 Scene Reconstruction using Cascaded 2D Diffusion Priors
Soumava Paul, Christopher Wewer, Bernt Schiele, Jan Eric Lenssen
arXiv preprint, 26 May 2024
[arXiv]
A Pixel Is Worth More Than One 3D Gaussians in Single-View 3D Reconstruction
Jianghao Shen, Tianfu Wu
arXiv preprint, 30 May 2024
[arXiv]
GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction
Yuxuan Mu, Xinxin Zuo, Chuan Guo, Yilin Wang, Juwei Lu, Xiaofeng Wu, Songcen Xu, Peng Dai, Youliang Yan, Li Cheng
ECCV 2024, 5 Jul 2024
[arXiv]
Self-augmented Gaussian Splatting with Structure-aware Masks for Sparse-view 3D Reconstruction
Lingbei Meng, Bi'an Du, Wei Hu
arXiv preprint, 9 Aug 2024
[arXiv]
🔥ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
Fangfu Liu, Wenqiang Sun, Hanyang Wang, Yikai Wang, Haowen Sun, Junliang Ye, Jun Zhang, Yueqi Duan
arXiv preprint, 29 Aug 2024
Abstract
Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos. Despite great success in dense-view reconstruction scenarios, rendering a detailed scene from insufficient captured views is still an ill-posed optimization problem, often resulting in artifacts and distortions in unseen areas. In this paper, we propose ReconX, a novel 3D scene reconstruction paradigm that reframes the ambiguous reconstruction challenge as a temporal generation task. The key insight is to unleash the strong generative prior of large pre-trained video diffusion models for sparse-view reconstruction. However, 3D view consistency struggles to be accurately preserved in directly generated video frames from pre-trained models. To address this, given limited input views, the proposed ReconX first constructs a global point cloud and encodes it into a contextual space as the 3D structure condition. Guided by the condition, the video diffusion model then synthesizes video frames that are both detail-preserved and exhibit a high degree of 3D consistency, ensuring the coherence of the scene from various perspectives. Finally, we recover the 3D scene from the generated video through a confidence-aware 3D Gaussian Splatting optimization scheme. Extensive experiments on various real-world datasets show the superiority of our ReconX over state-of-the-art methods in terms of quality and generalizability.
[arXiv] [Project] [Video] [Code]
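A small hedged sketch of what a confidence-aware splatting loss can look like (the weighting scheme is an assumption, not the paper's exact formulation): per-pixel confidences scale the photometric error contributed by each generated frame:

```python
# Sketch only: confidence-weighted photometric loss for 3DGS optimization.
import torch

def confidence_weighted_l1(rendered, generated_frame, confidence):
    """rendered, generated_frame: (3, H, W); confidence: (H, W) in [0, 1]."""
    per_pixel = (rendered - generated_frame).abs().mean(dim=0)  # (H, W)
    # Unreliable pixels of generated frames contribute less to training.
    return (confidence * per_pixel).sum() / confidence.sum().clamp_min(1e-8)
```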
🔥ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien-Tsin Wong, Ying Shan, Yonghong Tian
arXiv preprint, 3 Sep 2024
Abstract
Despite recent advancements in neural 3D reconstruction, the dependence on dense multi-view captures restricts their broader applicability. In this work, we propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images with the prior of a video diffusion model. Our method takes advantage of the powerful generation capabilities of the video diffusion model and the coarse 3D clues offered by point-based representations to generate high-quality video frames with precise camera pose control. To further enlarge the generation range of novel views, we tailored an iterative view synthesis strategy together with a camera trajectory planning algorithm to progressively extend the 3D clues and the areas covered by the novel views. With ViewCrafter, we can facilitate various applications, such as immersive experiences with real-time rendering by efficiently optimizing a 3D-GS representation using the reconstructed 3D points and the generated novel views, and scene-level text-to-3D generation for more imaginative content creation. Extensive experiments on diverse datasets demonstrate the strong generalization capability and superior performance of our method in synthesizing high-fidelity and consistent novel views.
[arXiv] [Project] [Video] [Code]
LM-Gaussian: Boost Sparse-view 3D Gaussian Splatting with Large Model Priors
Hanyang Yu, Xiaoxiao Long, Ping Tan
arXiv preprint, 5 Sep 2024
[arXiv] [Project] [Video] [Code]
Optimizing 3D Gaussian Splatting for Sparse Viewpoint Scene Reconstruction
Shen Chen, Jiale Zhou, Lei Li
arXiv preprint, 5 Sep 2024
[arXiv]
Object Gaussian for Monocular 6D Pose Estimation from Sparse Views
Luqing Luo, Shichu Sun, Jiangang Yang, Linfang Zheng, Jinwei Du, Jian Liu
arXiv preprint, 4 Sep 2024
[arXiv]
Single-View 3D Reconstruction via SO(2)-Equivariant Gaussian Sculpting Networks
Ruihan Xu, Anthony Opipari, Joshua Mah, Stanley Lewis, Haoran Zhang, Hanzhe Guo, Odest Chadwicke Jenkins
RSS 2024, 11 Sep 2024
[arXiv]
Vista3D: Unravel the 3D Darkside of a Single Image
Qiuhong Shen, Xingyi Yang, Michael Bi Mi, Xinchao Wang
ECCV 2024, 18 Sep 2024
[arXiv] [Code]
MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views
Wangze Xu, Huachen Gao, Shihe Shen, Rui Peng, Jianbo Jiao, Ronggang Wang
ECCV 2024, 22 Sep 2024
[arXiv] [Project] [Code]
HiSplat: Hierarchical 3D Gaussian Splatting for Generalizable Sparse-View Reconstruction
Shengji Tang, Weicai Ye, Peng Ye, Weihao Lin, Yang Zhou, Tao Chen, Wanli Ouyang
arXiv preprint, 8 Oct 2024
[arXiv] [Project] [Code]
MCGS: Multiview Consistency Enhancement for Sparse-View 3D Gaussian Radiance Fields
Yuru Xiao, Deming Zhai, Wenbo Zhao, Kui Jiang, Junjun Jiang, Xianming Liu
arXiv preprint, 15 Oct 2024
[arXiv]
Few-shot Novel View Synthesis using Depth Aware 3D Gaussian Splatting
Raja Kumar, Vanshika Vats
ECCV 2024 Workshop S3DSGR, 14 Oct 2024
[arXiv] [Code]
3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors
Xi Liu, Chaoyi Zhou, Siyu Huang
NeurIPS 2024, 21 Oct 2024
[arXiv] [Project]
Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis
Liang Han, Junsheng Zhou, Yu-Shen Liu, Zhizhong Han
NeurIPS 2024, 24 Oct 2024
[arXiv] [Project] [Code]
Structure Consistent Gaussian Splatting with Matching Prior for Few-shot Novel View Synthesis
Rui Peng, Wangze Xu, Luyang Tang, Liwei Liao, Jianbo Jiao, Ronggang Wang
NeurIPS 2024, 6 Nov 2024
[arXiv] [Code]
FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training
Ruihong Yin, Vladimir Yugay, Yue Li, Sezer Karaoglu, Theo Gevers
NeurIPS 2024, 4 Nov 2024
[arXiv]
GBR: Generative Bundle Refinement for High-fidelity Gaussian Splatting and Meshing
Jianing Zhang, Yuchao Zheng, Ziwei Li, Qionghai Dai, Xiaoyun Yuan
8 Dec 2024
[arXiv]
TSGaussian: Semantic and Depth-Guided Target-Specific Gaussian Splatting from Sparse Views
Liang Zhao, Zehan Bao, Yi Xie, Hong Chen, Yaohui Chen, Weifu Li
13 Dec 2024
[arXiv] [Code]
SolidGS: Consolidating Gaussian Surfel Splatting for Sparse-View Surface Reconstruction
Zhuowen Shen, Yuan Liu, Zhang Chen, Zhong Li, Jiepeng Wang, Yongqing Liang, Zhengming Yu, Jingdong Zhang, Yi Xu, Scott Schaefer, Xin Li, Wenping Wang
19 Dec 2024
[arXiv] [Project]
Improving Geometry in Sparse-View 3DGS via Reprojection-based DoF Separation
Yongsung Kim, Minjun Park, Jooyoung Choi, Sungroh Yoon
19 Dec 2024
[arXiv]
COLMAP-Free 3D Gaussian Splatting
Yang Fu, Sifei Liu, Amey Kulkarni, Jan Kautz, Alexei A. Efros, Xiaolong Wang
CVPR 2024, 12 Dec 2023
[arXiv] [Project] [Video]
iComMa: Inverting 3D Gaussians Splatting for Camera Pose Estimation via Comparing and Matching
Yuan Sun, Xuan Wang, Yunfan Zhang, Jie Zhang, Caigui Jiang, Yu Guo, Fei Wang
arXiv preprint, 14 Dec 2023
[arXiv]
A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose
Kaiwen Jiang, Yang Fu, Mukund Varma T, Yash Belhe, Xiaolong Wang, Hao Su, Ravi Ramamoorthi
arXiv preprint, 6 May 2024
[arXiv]
🔥6DGS: 6D Pose Estimation from a Single Image and a 3D Gaussian Splatting Model
Matteo Bortolon, Theodore Tsesmelis, Stuart James, Fabio Poiesi, Alessio Del Bue
ECCV 2024, 22 Jul 2024
Abstract
We propose 6DGS to estimate the camera pose of a target RGB image given a 3D Gaussian Splatting (3DGS) model representing the scene. 6DGS avoids the iterative process typical of analysis-by-synthesis methods (e.g. iNeRF) that also require an initialization of the camera pose in order to converge. Instead, our method estimates a 6DoF pose by inverting the 3DGS rendering process. Starting from the object surface, we define a radiant Ellicell that uniformly generates rays departing from each of the ellipsoids that parameterize the 3DGS model. Each Ellicell ray is associated with the rendering parameters of its ellipsoid, which in turn are used to obtain the best bindings between the target image pixels and the cast rays. These pixel-ray bindings are then ranked to select the best-scoring bundle of rays, whose intersection provides the camera center and, in turn, the camera rotation. The proposed solution obviates the necessity of an "a priori" pose for initialization, and it solves 6DoF pose estimation in closed form, without the need for iterations. Moreover, compared to the existing Novel View Synthesis (NVS) baselines for pose estimation, 6DGS can improve the overall average rotational accuracy by 12% and translation accuracy by 22% on real scenes, despite not requiring any initialization pose. At the same time, our method operates near real-time, reaching 15fps on consumer hardware.
[arXiv] [Project] [Code] [Video]
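The closed-form step here is standard least-squares ray intersection: the point minimizing the summed squared distance to a bundle of rays solves a 3×3 linear system. A worked sketch of that step (illustrative, not the authors' code):

```python
# Sketch only: camera center as the least-squares intersection of rays.
import numpy as np

def intersect_rays(origins, directions):
    """origins: (N, 3); directions: (N, 3), assumed unit length."""
    eye = np.eye(3)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        m = eye - np.outer(d, d)  # projector onto the plane orthogonal to d
        A += m
        b += m @ o
    return np.linalg.solve(A, b)  # point closest to all rays simultaneously
```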
GSLoc: Efficient Camera Pose Refinement via 3D Gaussian Splatting
Changkun Liu, Shuai Chen, Yash Bhalgat, Siyan Hu, Zirui Wang, Ming Cheng, Victor Adrian Prisacariu, Tristan Braud
arXiv preprint, 20 Aug 2024
[arXiv]
Splatt3R: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs
Brandon Smart, Chuanxia Zheng, Iro Laina, Victor Adrian Prisacariu
arXiv preprint, 25 Aug 2024
[arXiv] [Project] [Code]
HGSLoc: 3DGS-based Heuristic Camera Pose Refinement
Zhongyan Niu, Zhen Tan
arXiv preprint, 17 Sep 2024
[arXiv]
GSplatLoc: Grounding Keypoint Descriptors into 3D Gaussian Splatting for Improved Visual Localization
Gennady Sidorov, Malik Mohrat, Ksenia Lebedeva, Ruslan Rakhimov, Sergey Kolyubin
arXiv preprint, 24 Sep 2024
[arXiv] [Project] [Video] [Code]
SplatLoc: 3D Gaussian Splatting-based Visual Localization for Augmented Reality
Hongjia Zhai, Xiyu Zhang, Boming Zhao, Hai Li, Yijia He, Zhaopeng Cui, Hujun Bao, Guofeng Zhang
arXiv preprint, 21 Sep 2024
[arXiv] [Project] [Code]
Generating 3D-Consistent Videos from Unposed Internet Photos
Gene Chou, Kai Zhang, Sai Bi, Hao Tan, Zexiang Xu, Fujun Luan, Bharath Hariharan, Noah Snavely
arXiv preprint, 20 Nov 2024
[arXiv]
ZeroGS: Training 3D Gaussian Splatting from Unposed Images
Yu Chen, Rolandos Alexandros Potamias, Evangelos Ververas, Jifei Song, Jiankang Deng, Gim Hee Lee
24 Nov 2024
[arXiv] [Project] [Code]
SfM-Free 3D Gaussian Splatting via Hierarchical Training
Bo Ji, Angela Yao
2 Dec 2024
[arXiv] [Code]
DynSUP: Dynamic Gaussian Splatting from An Unposed Image Pair
Weihang Li, Weirong Chen, Shenhan Qian, Jiajie Chen, Daniel Cremers, Haoang Li
1 Dec 2024
[arXiv] [Project]
Object Pose Estimation Using Implicit Representation For Transparent Objects
Varun Burde, Artem Moroz, Vit Zeman, Pavel Burget
arXiv preprint, 17 Oct 2024
[arXiv]
GS2Pose: Two-stage 6D Object Pose Estimation Guided by Gaussian Splatting
Jilan Mei, Junbo Li, Cai Meng
arXiv preprint, 6 Nov 2024
[arXiv]
GSGTrack: Gaussian Splatting-Guided Object Pose Tracking from RGB Videos
Zhiyuan Chen, Fan Lu, Guo Yu, Bin Li, Sanqing Qu, Yuan Huang, Changhong Fu, Guang Chen
3 Dec 2024
[arXiv]
6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting
Yufeng Jin, Vignesh Prasad, Snehal Jauhri, Mathias Franzius, Georgia Chalvatzaki
2 Dec 2024
[arXiv]
GFreeDet: Exploiting Gaussian Splatting and Foundation Models for Model-free Unseen Object Detection in the BOP Challenge 2024
Xingyu Liu, Yingyue Li, Chengxi Li, Gu Wang, Chenyangguang Zhang, Ziqin Huang, Xiangyang Ji
2 Dec 2024
[arXiv]
NeRFs to Gaussian Splats, and Back
Siming He, Zach Osman, Pratik Chaudhari
arXiv preprint, 15 May 2024
[arXiv] [Code]
GGRt: Towards Generalizable 3D Gaussians without Pose Priors in Real-Time
Hao Li, Yuanyuan Gao, Dingwen Zhang, Chenming Wu, Yalun Dai, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han
arXiv preprint, 15 Mar 2024
[arXiv] [Project]
latentSplat: Autoencoding Variational Gaussians for Fast Generalizable 3D Reconstruction
Christopher Wewer, Kevin Raj, Eddy Ilg, Bernt Schiele, Jan Eric Lenssen
arXiv preprint, 24 Mar 2024
[arXiv] [Project]
Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo
Tianqi Liu, Guangcong Wang, Shoukang Hu, Liao Shen, Xinyi Ye, Yuhang Zang, Zhiguo Cao, Wei Li, Ziwei Liu
arXiv preprint, 20 May 2024
[arXiv] [Project] [Code] [Video]
GS-Net: Generalizable Plug-and-Play 3D Gaussian Splatting Module
Yichen Zhang, Zihan Wang, Jiali Han, Peilin Li, Jiaxun Zhang, Jianqiang Wang, Lei He, Keqiang Li
arXiv preprint, 17 Sep 2024
[arXiv]
DepthSplat: Connecting Gaussian Splatting and Depth
Haofei Xu, Songyou Peng, Fangjinhua Wang, Hermann Blum, Daniel Barath, Andreas Geiger, Marc Pollefeys
arXiv preprint, 17 Oct 2024
[arXiv] [Project] [Code]
Epipolar-Free 3D Gaussian Splatting for Generalizable Novel View Synthesis
Zhiyuan Min, Yawei Luo, Jianwen Sun, Yi Yang
NeurIPS 2024, 30 Oct 2024
[arXiv] [Project]
GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views
Boyao Zhou, Shunyuan Zheng, Hanzhang Tu, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
CVPR 2024, 18 Nov 2024
[arXiv] [Project]
SmileSplat: Generalizable Gaussian Splats for Unconstrained Sparse Images
Yanyan Li, Yixin Fang, Federico Tombari, Gim Hee Lee
27 Nov 2024
[arXiv]
Distractor-free Generalizable 3D Gaussian Splatting
Yanqi Bao, Jing Liao, Jing Huo, Yang Gao
26 Nov 2024
[arXiv] [Code]
SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian Splatting
Gyeongjin Kang, Jisang Yoo, Jihyeon Park, Seungtae Nam, Hyeonsoo Im, Sangheon Shin, Sangpil Kim, Eunbyung Park
26 Nov 2024
[arXiv] [Project] [Code]
Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction
Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park
9 Dec 2024
[arXiv] [Project] [Code]
Splatter-360: Generalizable 360° Gaussian Splatting for Wide-baseline Panoramic Images
Zheng Chen, Chenming Wu, Zhelun Shen, Chen Zhao, Weicai Ye, Haocheng Feng, Errui Ding, Song-Hai Zhang
9 Dec 2024
[arXiv] [Project] [Code]
GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency
Dongyue Lu, Lingdong Kong, Tianxin Huang, Gim Hee Lee
12 Dec 2024
[arXiv] [Project] [Code]
DUSt3R: Geometric 3D Vision Made Easy
Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jerome Revaud
21 Dec 2023
Abstract
Multi-view stereo reconstruction (MVS) in the wild requires first estimating the camera parameters, e.g. intrinsic and extrinsic parameters. These are usually tedious and cumbersome to obtain, yet they are mandatory to triangulate corresponding pixels in 3D space, which is the core of all best performing MVS algorithms. In this work, we take an opposite stance and introduce DUSt3R, a radically novel paradigm for Dense and Unconstrained Stereo 3D Reconstruction of arbitrary image collections, i.e. operating without prior information about camera calibration nor viewpoint poses. We cast the pairwise reconstruction problem as a regression of pointmaps, relaxing the hard constraints of usual projective camera models. We show that this formulation smoothly unifies the monocular and binocular reconstruction cases. In the case where more than two images are provided, we further propose a simple yet effective global alignment strategy that expresses all pairwise pointmaps in a common reference frame. We base our network architecture on standard Transformer encoders and decoders, allowing us to leverage powerful pretrained models. Our formulation directly provides a 3D model of the scene as well as depth information, but interestingly, we can seamlessly recover from it pixel matches as well as relative and absolute camera poses. Exhaustive experiments on all these tasks showcase that the proposed DUSt3R can unify various 3D vision tasks and set new SoTAs on monocular/multi-view depth estimation as well as relative pose estimation. In summary, DUSt3R makes many geometric 3D vision tasks easy.
[arXiv]
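One workable building block for expressing pairwise pointmaps in a common frame is the classical Umeyama similarity fit between corresponding 3D points; the sketch below is an assumption about one such alignment step, not DUSt3R's exact optimization:

```python
# Sketch only: scale + rotation + translation aligning corresponding points.
import numpy as np

def umeyama(src, dst):
    """src, dst: (N, 3) correspondences. Returns s, R, t with dst ~= s*R@src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, S, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, sign])          # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```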
Flash3D: Feed-Forward Generalisable 3D Scene Reconstruction from a Single Image
Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F. Henriques, Christian Rupprecht, Andrea Vedaldi
arXiv preprint, 6 Jun 2024
[arXiv] [Project]
PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting
Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jisang Han, Jiaolong Yang, Chong Luo, Seungryong Kim
arXiv preprint, 29 Oct 2024
[arXiv] [Project] [Code]
🔥No Pose, No Problem: Surprisingly Simple 3D Gaussian Splats from Sparse Unposed Images
Botao Ye, Sifei Liu, Haofei Xu, Xueting Li, Marc Pollefeys, Ming-Hsuan Yang, Songyou Peng
arXiv preprint, 31 Oct 2024
Abstract
We introduce NoPoSplat, a feed-forward model capable of reconstructing 3D scenes parameterized by 3D Gaussians from unposed sparse multi-view images. Our model, trained exclusively with photometric loss, achieves real-time 3D Gaussian reconstruction during inference. To eliminate the need for accurate pose input during reconstruction, we anchor one input view's local camera coordinates as the canonical space and train the network to predict Gaussian primitives for all views within this space. This approach obviates the need to transform Gaussian primitives from local coordinates into a global coordinate system, thus avoiding errors associated with per-frame Gaussians and pose estimation. To resolve scale ambiguity, we design and compare various intrinsic embedding methods, ultimately opting to convert camera intrinsics into a token embedding and concatenate it with image tokens as input to the model, enabling accurate scene scale prediction. We utilize the reconstructed 3D Gaussians for novel view synthesis and pose estimation tasks and propose a two-stage coarse-to-fine pipeline for accurate pose estimation. Experimental results demonstrate that our pose-free approach can achieve superior novel view synthesis quality compared to pose-required methods, particularly in scenarios with limited input image overlap. For pose estimation, our method, trained without ground truth depth or explicit matching loss, significantly outperforms the state-of-the-art methods with substantial improvements. This work makes significant advances in pose-free generalizable 3D reconstruction and demonstrates its applicability to real-world scenarios. Code and trained models are available at this https URL.
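A hedged sketch of the intrinsics-as-token idea (module and dimension names are assumptions, not the released code): embed (fx, fy, cx, cy) with a linear layer and prepend the result to the image token sequence:

```python
# Sketch only: camera intrinsics as an extra transformer token.
import torch
import torch.nn as nn

class IntrinsicsToken(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.proj = nn.Linear(4, dim)  # (fx, fy, cx, cy) -> one token

    def forward(self, image_tokens, intrinsics):
        """image_tokens: (B, N, dim); intrinsics: (B, 4), normalized by image size."""
        tok = self.proj(intrinsics).unsqueeze(1)      # (B, 1, dim)
        return torch.cat([tok, image_tokens], dim=1)  # (B, N + 1, dim)
```

MVSplat360: Feed-Forward 360 Scene Synthesis from Sparse Views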
Yuedong Chen, Chuanxia Zheng, Haofei Xu, Bohan Zhuang, Andrea Vedaldi, Tat-Jen Cham, Jianfei Cai
NeurIPS 2024, 7 Nov 2024
[arXiv] [Project] [Code]
NovelGS: Consistent Novel-view Denoising via Large Gaussian Reconstruction Model
Jinpeng Liu, Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Ying Shan, Yansong Tang
25 Nov 2024
[arXiv]
PreF3R: Pose-Free Feed-Forward 3D Gaussian Splatting from Variable-length Image Sequence
Zequn Chen, Jiezhi Yang, Heng Yang
25 Nov 2024
[arXiv] [Project] [Code]
Wonderland: Navigating 3D Scenes from a Single Image
Hanwen Liang, Junli Cao, Vidit Goel, Guocheng Qian, Sergei Korolev, Demetri Terzopoulos, Konstantinos N. Plataniotis, Sergey Tulyakov, Jian Ren
16 Dec 2024
[arXiv] [Project]
PanSplat: 4K Panorama Synthesis with Feed-Forward Gaussian Splatting
Cheng Zhang, Haofei Xu, Qianyi Wu, Camilo Cruz Gambardella, Dinh Phung, Jianfei Cai
16 Dec 2024
[arXiv] [Code]
MV-DUSt3R+: Single-Stage Scene Reconstruction from Sparse Views In 2 Seconds
Zhenggang Tang, Yuchen Fan, Dilin Wang, Hongyu Xu, Rakesh Ranjan, Alexander Schwing, Zhicheng Yan
9 Dec 2024
[arXiv]
FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction
Jiale Xu, Shenghua Gao, Ying Shan
12 Dec 2024
[arXiv] [Project] [Code]
LiftImage3D: Lifting Any Single Image to 3D Gaussians with Video Generation Priors
Yabo Chen, Chen Yang, Jiemin Fang, Xiaopeng Zhang, Lingxi Xie, Wei Shen, Wenrui Dai, Hongkai Xiong, Qi Tian
12 Dec 2024
[arXiv] [Project]
CATSplat: Context-Aware Transformer with Spatial Guidance for Generalizable 3D Gaussian Splatting from A Single-View Image
Wonseok Roh, Hwanhee Jung, Jong Wook Kim, Seunggwan Lee, Innfarn Yoo, Andreas Lugmayr, Seunggeun Chi, Karthik Ramani, Sangpil Kim
17 Dec 2024
[arXiv]
OmniSplat: Taming Feed-Forward 3D Gaussian Splatting for Omnidirectional Images with Editable Capabilities
Suyoung Lee, Jaeyoung Chung, Kihoon Kim, Jaeyoo Huh, Gunhee Lee, Minsoo Lee, Kyoung Mu Lee
21 Dec 2024
[arXiv]
360-GS: Layout-guided Panoramic Gaussian Splatting For Indoor Roaming
Jiayang Bai, Letian Huang, Jie Guo, Wen Gong, Yuanqi Li, Yanwen Guo
arXiv preprint, 1 Feb 2024
[arXiv]
MonoSelfRecon: Purely Self-Supervised Explicit Generalizable 3D Reconstruction of Indoor Scenes from Monocular RGB Views
Runfa Li, Upal Mahbub, Vasudev Bhaskaran, Truong Nguyen
arXiv preprint, 10 Apr 2024
[arXiv]
FreeSplat: Generalizable 3D Gaussian Splatting Towards Free-View Synthesis of Indoor Scenes
Yunsong Wang, Tianxin Huang, Hanlin Chen, Gim Hee Lee
arXiv preprint, 28 May 2024
[arXiv]
Scalable Indoor Novel-View Synthesis using Drone-Captured 360 Imagery with 3D Gaussian Splatting
Yuanbo Chen, Chengyu Zhang, Jason Wang, Xuefan Gao, Avideh Zakhor
ECCV 2024 Workshop S3DSGR, 15 Oct 2024
[arXiv]
2DGS-Room: Seed-Guided 2D Gaussian Splatting with Geometric Constraints for High-Fidelity Indoor Scene Reconstruction
Wanting Zhang, Haodong Xiang, Zhichao Liao, Xiansong Lai, Xinghui Li, Long Zeng
4 Dec 2024
[arXiv]
SWAG: Splatting in the Wild images with Appearance-conditioned Gaussians
Hiba Dahmani, Moussab Bennehar, Nathan Piasco, Luis Roldao, Dzmitry Tsishkou
arXiv preprint, 15 Mar 2024
[arXiv]
Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections
Dongbin Zhang, Chuming Wang, Weitao Wang, Peihao Li, Minghan Qin, Haoqian Wang
arXiv preprint, 23 Mar 2024
[arXiv]
WE-GS: An In-the-wild Efficient 3D Gaussian Representation for Unconstrained Photo Collections
Yuze Wang, Junyi Wang, Yue Qi
arXiv preprint, 4 Jun 2024
[arXiv] [Project]
Wild-GS: Real-Time Novel View Synthesis from Unconstrained Photo Collections
Jiacong Xu, Yiqun Mei, Vishal M. Patel
arXiv preprint, 14 Jun 2024
[arXiv]
🔥WildGaussians: 3D Gaussian Splatting in the Wild
Jonas Kulhanek, Songyou Peng, Zuzana Kukelova, Marc Pollefeys, Torsten Sattler
NeurIPS 2024, 11 Jul 2024
Abstract
While the field of 3D scene reconstruction is dominated by NeRFs due to their photorealistic quality, 3D Gaussian Splatting (3DGS) has recently emerged, offering similar quality with real-time rendering speeds. However, both methods primarily excel with well-controlled 3D scenes, while in-the-wild data - characterized by occlusions, dynamic objects, and varying illumination - remains challenging. NeRFs can adapt to such conditions easily through per-image embedding vectors, but 3DGS struggles due to its explicit representation and lack of shared parameters. To address this, we introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS. By leveraging robust DINO features and integrating an appearance modeling module within 3DGS, our method achieves state-of-the-art results. We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data, all within a simple architectural framework.
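A minimal sketch of what such an appearance module can look like (names and dimensions are assumptions, not the released code): a small MLP maps a per-image embedding plus a per-Gaussian feature to an affine modulation of the base color:

```python
# Sketch only: per-image appearance modulation of Gaussian colors.
import torch
import torch.nn as nn

class AppearanceModule(nn.Module):
    def __init__(self, img_dim=32, gauss_dim=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + gauss_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),  # 3 multiplicative + 3 additive color terms
        )

    def forward(self, base_color, gauss_emb, image_emb):
        """base_color: (N, 3); gauss_emb: (N, gauss_dim); image_emb: (img_dim,)."""
        x = torch.cat([gauss_emb, image_emb.expand(len(gauss_emb), -1)], dim=1)
        scale, shift = self.mlp(x).chunk(2, dim=1)
        return base_color * (1 + scale) + shift
```

Periodic Vibration Gaussian: Dynamic Urban Scene Reconstruction and Real-time Rendering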
Yurui Chen, Chun Gu, Junzhe Jiang, Xiatian Zhu, Li Zhang
arXiv preprint, 30 Nov 2023
[arXiv] [Project]
GauU-Scene: A Scene Reconstruction Benchmark on Large Scale 3D Reconstruction Dataset Using Gaussian Splatting
Butian Xiong, Zhuo Li, Zhen Li
arXiv preprint, 25 Jan 2024
[arXiv]
GaussianPro: 3D Gaussian Splatting with Progressive Propagation
Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, Xuejin Chen
arXiv preprint, 22 Feb 2024
[arXiv] [Project] [Code]
🔥VastGaussian: Vast 3D Gaussians for Large Scene
Jiaqi Lin, Zhihao Li, Xiao Tang, Jianzhuang Liu, Shiyong Liu, Jiayue Liu, Yangdi Lu, Xiaofei Wu, Songcen Xu, Youliang Yan, Wenming Yang
CVPR 2024, 27 Feb 2024
Abstract
Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations. To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting. We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization. We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images. Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.
🔥Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
Kerui Ren, Lihan Jiang, Tao Lu, Mulin Yu, Linning Xu, Zhangkai Ni, Bo Dai
arXiv preprint, 26 Mar 2024
Abstract
The recent 3D Gaussian splatting (3D-GS) has shown remarkable rendering fidelity and efficiency compared to NeRF-based neural scene representations. While demonstrating the potential for real-time rendering, 3D-GS encounters rendering bottlenecks in large scenes with complex details due to an excessive number of Gaussian primitives located within the viewing frustum. This limitation is particularly noticeable in zoom-out views and can lead to inconsistent rendering speeds in scenes with varying details. Moreover, it often struggles to capture the corresponding level of details at different scales with its heuristic density control operation. Inspired by the Level-of-Detail (LOD) techniques, we introduce Octree-GS, featuring an LOD-structured 3D Gaussian approach supporting level-of-detail decomposition for scene representation that contributes to the final rendering results. Our model dynamically selects the appropriate level from the set of multi-resolution anchor points, ensuring consistent rendering performance with adaptive LOD adjustments while maintaining high-fidelity rendering results.
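A toy sketch of distance-based LOD selection in this spirit (the formula is an assumption, not the paper's exact rule): coarser anchor levels are chosen as the camera moves away, so distant regions use fewer, larger Gaussians:

```python
# Sketch only: pick an anchor LOD level from camera distance.
import numpy as np

def select_lod(anchor_xyz, cam_pos, max_level, base_dist=1.0):
    """anchor_xyz: (N, 3); cam_pos: (3,); returns an integer LOD level per anchor."""
    d = np.linalg.norm(anchor_xyz - cam_pos, axis=1)
    # Each doubling of distance drops one level of detail.
    level = max_level - np.log2(np.maximum(d / base_dist, 1.0))
    return np.clip(np.round(level), 0, max_level).astype(int)
```

SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior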
Zhongrui Yu, Haoran Wang, Jinze Yang, Hanzhang Wang, Zeke Xie, Yunfeng Cai, Jiale Cao, Zhong Ji, Mingming Sun
arXiv preprint, 29 Mar 2024
[arXiv]
HO-Gaussian: Hybrid Optimization of 3D Gaussian Splatting for Urban Scenes
Zhuopeng Li, Yilin Zhang, Chenming Wu, Jianke Zhu, Liangjun Zhang
arXiv preprint, 29 Mar 2024
[arXiv]
🔥CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians
Yang Liu, He Guan, Chuanchen Luo, Lue Fan, Junran Peng, Zhaoxiang Zhang
ECCV 2024, 1 Apr 2024
Abstract
The advancement of real-time 3D scene reconstruction and novel view synthesis has been significantly propelled by 3D Gaussian Splatting (3DGS). However, effectively training large-scale 3DGS and rendering it in real-time across various scales remains challenging. This paper introduces CityGaussian (CityGS), which employs a novel divide-and-conquer training approach and Level-of-Detail (LoD) strategy for efficient large-scale 3DGS training and rendering. Specifically, the global scene prior and adaptive training data selection enable efficient training and seamless fusion. Based on fused Gaussian primitives, we generate different detail levels through compression, and realize fast rendering across various scales through the proposed block-wise detail level selection and aggregation strategy. Extensive experimental results on large-scale scenes demonstrate that our approach attains state-of-the-art rendering quality, enabling consistent real-time rendering of large-scale scenes across vastly different scales. Our project page is available at this https URL.
LetsGo: Large-Scale Garage Modeling and Rendering via LiDAR-Assisted Gaussian Primitives
Jiadi Cui, Junming Cao, Yuhui Zhong, Liao Wang, Fuqiang Zhao, Penghao Wang, Yifan Chen, Zhipeng He, Lan Xu, Yujiao Shi, Yingliang Zhang, Jingyi Yu
arXiv preprint, 15 Apr 2024
[arXiv] [Project]
GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting
Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, Zexiang Xu
arXiv preprint, 30 Apr 2024
[arXiv] [Project]
DoGaussian: Distributed-Oriented Gaussian Splatting for Large-Scale 3D Reconstruction Via Gaussian Consensus
Yu Chen, Gim Hee Lee
arXiv preprint, 22 May 2024
[arXiv] [Project] [Code]
PyGS: Large-scale Scene Representation with Pyramidal 3D Gaussian Splatting
Zipeng Wang, Dan Xu
arXiv preprint, 27 May 2024
[arXiv]
GaussianFormer: Scene as Gaussians for Vision-Based 3D Semantic Occupancy Prediction
Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, Jiwen Lu
arXiv preprint, 27 May 2024
[arXiv] [Code]
3D StreetUnveiler with Semantic-Aware 2DGS
Jingwei Xu, Yikai Wang, Yiqun Zhao, Yanwei Fu, Shenghua Gao
arXiv preprint, 28 May 2024
[arXiv] [Project] [Code]
A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets
Bernhard Kerbl, Andréas Meuleman, Georgios Kopanas, Michael Wimmer, Alexandre Lanvin, George Drettakis
arXiv preprint, 17 Jun 2024
[arXiv] [Project]
VEGS: View Extrapolation of Urban Scenes in 3D Gaussian Splatting using Learned Priors
Sungwon Hwang, Min-Jung Kim, Taewoong Kang, Jayeon Kang, Jaegul Choo
arXiv preprint, 3 Jul 2024
[arXiv] [Project]
FlashGS: Efficient 3D Gaussian Splatting for Large-scale and High-resolution Rendering
Guofeng Feng, Siyan Chen, Rong Fu, Zimu Liao, Yi Wang, Tao Liu, Zhilin Pei, Hengjie Li, Xingcheng Zhang, Bo Dai
arXiv preprint, 15 Aug 2024
[arXiv]
GigaGS: Scaling up Planar-Based 3D Gaussians for Large Scene Surface Reconstruction
Junyi Chen, Weicai Ye, Yifan Wang, Danpeng Chen, Di Huang, Wanli Ouyang, Guofeng Zhang, Yu Qiao, Tong He
arXiv preprint, 10 Sep 2024
[arXiv]
LI-GS: Gaussian Splatting with LiDAR Incorporated for Accurate Large-Scale Reconstruction
Changjian Jiang, Ruilan Gao, Kele Shao, Yue Wang, Rong Xiong, Yu Zhang
arXiv preprint, 19 Sep 2024
[arXiv]
GaRField++: Reinforced Gaussian Radiance Fields for Large-Scale 3D Scene Reconstruction
Hanyue Zhang, Zhiliu Yang, Xinhe Zuo, Yuxin Tong, Ying Long, Chen Liu
arXiv preprint, 19 Sep 2024
[arXiv]
DENSER: 3D Gaussians Splatting for Scene Reconstruction of Dynamic Urban Environments
Mahmud A. Mohamad, Gamal Elghazaly, Arthur Hubert, Raphael Frank
arXiv preprint, 16 Sep 2024
[arXiv] [Code]
StreetSurfGS: Scalable Urban Street Surface Reconstruction with Planar-based Gaussian Splatting
Xiao Cui, Weicai Ye, Yifan Wang, Guofeng Zhang, Wengang Zhou, Tong He, Houqiang Li
arXiv preprint, 6 Oct 2024
[arXiv]
Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats
Chen Ziwen, Hao Tan, Kai Zhang, Sai Bi, Fujun Luan, Yicong Hong, Li Fuxin, Zexiang Xu
arXiv preprint, 16 Oct 2024
[arXiv] [Project]
🔥SCube: Instant Large-Scale Scene Reconstruction using VoxSplats
Xuanchi Ren, Yifan Lu, Hanxue Liang, Zhangjie Wu, Huan Ling, Mike Chen, Sanja Fidler, Francis Williams, Jiahui Huang
NeurIPS 2024, 26 Oct 2024
Abstract
We present SCube, a novel method for reconstructing large-scale 3D scenes (geometry, appearance, and semantics) from a sparse set of posed images. Our method encodes reconstructed scenes using a novel representation VoxSplat, which is a set of 3D Gaussians supported on a high-resolution sparse-voxel scaffold. To reconstruct a VoxSplat from images, we employ a hierarchical voxel latent diffusion model conditioned on the input images followed by a feedforward appearance prediction model. The diffusion model generates high-resolution grids progressively in a coarse-to-fine manner, and the appearance network predicts a set of Gaussians within each voxel. From as few as 3 non-overlapping input images, SCube can generate millions of Gaussians with a 1024^3 voxel grid spanning hundreds of meters in 20 seconds. Past works tackling scene reconstruction from images either rely on per-scene optimization and fail to reconstruct the scene away from input views (thus requiring dense view coverage as input) or leverage geometric priors based on low-resolution models, which produce blurry results. In contrast, SCube leverages high-resolution sparse networks and produces sharp outputs from few views. We show the superiority of SCube compared to prior art using the Waymo self-driving dataset on 3D reconstruction and demonstrate its applications, such as LiDAR simulation and text-to-scene generation.
[arXiv] [Project] [Code] [Video]
ULSR-GS: Ultra Large-scale Surface Reconstruction Gaussian Splatting with Multi-View Geometric Consistency
Zhuoxiao Li, Shanliang Yao, Qizhong Gao, Angel F. Garcia-Fernandez, Yong Yue, Xiaohui Zhu
2 Dec 2024
[arXiv] [Project]
Momentum-GS: Momentum Gaussian Self-Distillation for High-Quality Large Scene Reconstruction
Jixuan Fan, Wanhua Li, Yifei Han, Yansong Tang
6 Dec 2024
[arXiv] [Project] [Code]
Radiant: Large-scale 3D Gaussian Rendering based on Hierarchical Framework
Haosong Peng, Tianyu Qi, Yufeng Zhan, Hao Li, Yalun Dai, Yuanqing Xia
7 Dec 2024
[arXiv]
Proc-GS: Procedural Building Generation for City Assembly with 3D Gaussians
Yixuan Li, Xingjian Ran, Linning Xu, Tao Lu, Mulin Yu, Zhenzhi Wang, Yuanbo Xiangli, Dahua Lin, Bo Dai
10 Dec 2024
[arXiv] [Project] [Code]
CoSurfGS: Collaborative 3D Surface Gaussian Splatting with Distributed Learning for Large Scene Reconstruction
Yuanyuan Gao, Yalun Dai, Hao Li, Weicai Ye, Junyi Chen, Danpeng Chen, Dingwen Zhang, Tong He, Guofeng Zhang, Junwei Han
23 Dec 2024
[arXiv] [Project]
DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes
Xiaoyu Zhou, Zhiwei Lin, Xiaojun Shan, Yongtao Wang, Deqing Sun, Ming-Hsuan Yang
CVPR 2024, 13 Dec 2023
[arXiv] [Code]
Street Gaussians for Modeling Dynamic Urban Scenes
Yunzhi Yan, Haotong Lin, Chenxu Zhou, Weijie Wang, Haiyang Sun, Kun Zhan, Xianpeng Lang, Xiaowei Zhou, Sida Peng
arXiv preprint, 2 Jan 2024
[arXiv] [Project] [Code]
TCLC-GS: Tightly Coupled LiDAR-Camera Gaussian Splatting for Surrounding Autonomous Driving Scenes
Cheng Zhao, Su Sun, Ruoyu Wang, Yuliang Guo, Jun-Jun Wan, Zhou Huang, Xinyu Huang, Yingjie Victor Chen, Liu Ren
arXiv preprint, 3 Apr 2024
[arXiv]
S^3 Gaussian: Self-Supervised Street Gaussians for Autonomous Driving
Nan Huang, Xiaobao Wei, Wenzhao Zheng, Pengju An, Ming Lu, Wei Zhan, Masayoshi Tomizuka, Kurt Keutzer, Shanghang Zhang
arXiv preprint, 30 May 2024
[arXiv] [Code]
VDG: Vision-Only Dynamic Gaussian for Driving Simulation
Hao Li, Jingfeng Li, Dingwen Zhang, Chenming Wu, Jieqi Shi, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Junwei Han
arXiv preprint, 26 Jun 2024
[arXiv] [Project]
AutoSplat: Constrained Gaussian Splatting for Autonomous Driving Scene Reconstruction
Mustafa Khan, Hamidreza Fazlali, Dhruv Sharma, Tongtong Cao, Dongfeng Bai, Yuan Ren, Bingbing Liu
arXiv preprint, 2 Jul 2024
[arXiv] [Project]
DHGS: Decoupled Hybrid Gaussian Splatting for Driving Scene
Xi Shi, Lingli Chen, Peng Wei, Xi Wu, Tian Jiang, Yonggang Luo, Lecheng Xie
arXiv preprint, 23 Jul 2024
[arXiv] [Project]
GaussianBeV: 3D Gaussian Representation meets Perception Models for BeV Segmentation
Florian Chabot, Nicolas Granger, Guillaume Lapouge
arXiv preprint, 19 Jul 2024
[arXiv]
🔥OmniRe: Omni Urban Scene Reconstruction
Ziyu Chen, Jiawei Yang, Jiahui Huang, Riccardo de Lutio, Janick Martinez Esturo, Boris Ivanovic, Or Litany, Zan Gojcic, Sanja Fidler, Marco Pavone, Li Song, Yue Wang
arXiv preprint, 29 Aug 2024
Abstract
We introduce OmniRe, a holistic approach for efficiently reconstructing high-fidelity dynamic urban scenes from on-device logs. Recent methods for modeling driving sequences using neural radiance fields or Gaussian Splatting have demonstrated the potential of reconstructing challenging dynamic scenes, but often overlook pedestrians and other non-vehicle dynamic actors, hindering a complete pipeline for dynamic urban scene reconstruction. To that end, we propose a comprehensive 3DGS framework for driving scenes, named OmniRe, that allows for accurate, full-length reconstruction of diverse dynamic objects in a driving log. OmniRe builds dynamic neural scene graphs based on Gaussian representations and constructs multiple local canonical spaces that model various dynamic actors, including vehicles, pedestrians, and cyclists, among many others. This capability is unmatched by existing methods. OmniRe allows us to holistically reconstruct different objects present in the scene, subsequently enabling the simulation of reconstructed scenarios with all actors participating in real-time (~60Hz). Extensive evaluations on the Waymo dataset show that our approach outperforms prior state-of-the-art methods quantitatively and qualitatively by a large margin. We believe our work fills a critical gap in driving reconstruction.
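A minimal sketch of the scene-graph composition described here (an assumed structure, not OmniRe's code): each dynamic actor keeps its Gaussians in a local canonical space and is rigidly posed into the world frame per frame:

```python
# Sketch only: compose static Gaussians with per-actor canonical Gaussians.
import numpy as np

def compose_frame(static_xyz, actors, poses_at_t):
    """static_xyz: (Ns, 3). actors: dict name -> (Na, 3) canonical centers.
    poses_at_t: dict name -> (R: (3, 3), t: (3,)) world-from-canonical pose."""
    parts = [static_xyz]
    for name, canonical in actors.items():
        R, t = poses_at_t[name]
        parts.append(canonical @ R.T + t)  # rigidly place this actor this frame
    return np.concatenate(parts, axis=0)   # all Gaussian centers in world space
```

Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty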
Saining Zhang, Baijun Ye, Xiaoxue Chen, Yuantao Chen, Zongzheng Zhang, Cheng Peng, Yongliang Shi, Hao Zhao
BMVC 2024, 27 Aug 2024
[arXiv] [Project] [Code]
GGS: Generalizable Gaussian Splatting for Lane Switching in Autonomous Driving
Huasong Han, Kaixuan Zhou, Xiaoxiao Long, Yusen Wang, Chunxia Xiao
arXiv preprint, 4 Sep 2024
[arXiv]
DrivingForward: Feed-forward 3D Gaussian Splatting for Driving Scene Reconstruction from Flexible Surround-view Input
Qijian Tian, Xin Tan, Yuan Xie, Lizhuang Ma
arXiv preprint, 19 Sep 2024
[arXiv] [Project] [Code]
RenderWorld: World Model with Self-Supervised 3D Label
Ziyang Yan, Wenzhen Dong, Yihua Shao, Yuhang Lu, Liu Haiyang, Jingwen Liu, Haozhe Wang, Zhe Wang, Yan Wang, Fabio Remondino, Yuexin Ma
arXiv preprint, 17 Sep 2024
[arXiv]
UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection
Haocheng Zhao, Runwei Guan, Taoyu Wu, Ka Lok Man, Limin Yu, Yutao Yue
arXiv preprint, 23 Sep 2024
[arXiv]
GSPR: Multimodal Place Recognition Using 3D Gaussian Splatting for Autonomous Driving
Zhangshuo Qi, Junyi Ma, Jingyi Xu, Zijie Zhou, Luqi Cheng, Guangming Xiong
arXiv preprint, 1 Oct 2024
[arXiv]
LiDAR-GS: Real-time LiDAR Re-Simulation using Gaussian Splatting
Qifeng Chen, Sheng Yang, Sicong Du, Tao Tang, Peng Chen, Yuchi Huo
arXiv preprint, 7 Oct 2024
[arXiv]
DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation
Guosheng Zhao, Chaojun Ni, Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Boyuan Wang, Youyi Zhang, Wenjun Mei, Xingang Wang
arXiv preprint, 17 Oct 2024
[arXiv] [Project] [Code]
DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction for Urban Driving Scenes
Chensheng Peng, Chengwei Zhang, Yixiao Wang, Chenfeng Xu, Yichen Xie, Wenzhao Zheng, Kurt Keutzer, Masayoshi Tomizuka, Wei Zhan
18 Nov 2024
[arXiv] [Code]
GaussianPretrain: A Simple Unified 3D Gaussian Representation for Visual Pre-training in Autonomous Driving
Shaoqing Xu, Fang Li, Shengyin Jiang, Ziying Song, Li Liu, Zhi-xin Yang
19 Nov 2024
[arXiv] [Code]
SplatAD: Real-Time Lidar and Camera Rendering with 3D Gaussian Splatting for Autonomous Driving
Georg Hess, Carl Lindström, Maryam Fatemi, Christoffer Petersson, Lennart Svensson
25 Nov 2024
[arXiv] [Project]
EMD: Explicit Motion Modeling for High-Quality Street Gaussian Splatting
Xiaobao Wei, Qingpo Wuwu, Zhongyu Zhao, Zhuangzhe Wu, Nan Huang, Ming Lu, Ningning Ma, Shanghang Zhang
23 Nov 2024
[arXiv] [Project]
SplatFlow: Self-Supervised Dynamic Gaussian Splatting in Neural Motion Flow Field for Autonomous Driving
Su Sun, Cheng Zhao, Zhuoyang Sun, Yingjie Victor Chen, Mei Chen
23 Nov 2024
[arXiv]
HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving
Hongyu Zhou, Longzhong Lin, Jiabao Wang, Yichong Lu, Dongfeng Bai, Bingbing Liu, Yue Wang, Andreas Geiger, Yiyi Liao
2 Dec 2024
[arXiv] [Project] [Code]
Driving Scene Synthesis on Free-form Trajectories with Generative Prior
Zeyu Yang, Zijie Pan, Yuankun Yang, Xiatian Zhu, Li Zhang
2 Dec 2024
[arXiv]
GSRender: Deduplicated Occupancy Prediction via Weakly Supervised 3D Gaussian Splatting
Qianpu Sun, Changyong Shu, Sifan Zhou, Zichen Yu, Yan Chen, Dawei Yang, Yuan Chun
19 Dec 2024
[arXiv]
EGSRAL: An Enhanced 3D Gaussian Splatting based Renderer with Automated Labeling for Large-Scale Driving Scene
Yixiong Huo, Guangfeng Jiang, Hongyang Wei, Ji Liu, Song Zhang, Han Liu, Xingliang Huang, Mingjie Lu, Jinzhang Peng, Dong Li, Lu Tian, Emad Barsoum
AAAI 2025, 20 Dec 2024
[arXiv]
LiHi-GS: LiDAR-Supervised Gaussian Splatting for Highway Driving Scene Reconstruction
Pou-Chun Kung, Xianling Zhang, Katherine A. Skinner, Nikita Jaipuria
19 Dec 2024
[arXiv]
NeRF-To-Real Tester: Neural Radiance Fields as Test Image Generators for Vision of Autonomous Systems
Laura Weihl, Bilal Wehbe, Andrzej Wąsowski
20 Dec 2024
[arXiv]
GaussianOcc: Fully Self-supervised and Efficient 3D Occupancy Estimation with Gaussian Splatting
Wanshui Gan, Fang Liu, Hongbin Xu, Ningkai Mo, Naoto Yokoya
arXiv preprint, 21 Aug 2024
[arXiv] [Code]
L3DG: Latent 3D Gaussian Diffusion
Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, Matthias Nießner
SIGGRAPH Asia 2024, 17 Oct 2024
[arXiv] [Project] [Video]
A Lesson in Splats: Teacher-Guided Diffusion for 3D Gaussian Splats Generation with 2D Supervision
Chensheng Peng, Ido Sobol, Masayoshi Tomizuka, Kurt Keutzer, Chenfeng Xu, Or Litany
1 Dec 2024
[arXiv]
How to Use Diffusion Priors under Sparse Views?
Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li
3 Dec 2024
[arXiv] [Code]
GaussianDiffusion: 3D Gaussian Splatting for Denoising Diffusion Probabilistic Models with Structured Noise
Xinhai Li, Huaibin Wang, Kuo-Kun Tseng
arXiv preprint, 19 Nov 2023
[arXiv]
LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching
Yixun Liang, Xin Yang, Jiantao Lin, Haodong Li, Xiaogang Xu, Yingcong Chen
arXiv preprint, 19 Nov 2023
[arXiv] [Github]
LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
Jaeyoung Chung, Suyoung Lee, Hyeongjin Nam, Jaerin Lee, Kyoung Mu Lee
arXiv preprint, 22 Nov 2023
[arXiv] [Project]
DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation
Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, Gang Zeng
arXiv preprint, 28 Sep 2023
[arXiv] [Project] [Github]
Text-to-3D using Gaussian Splatting
Zilong Chen, Feng Wang, Huaping Liu
arXiv preprint, 29 Sep 2023
[arXiv] [Project] [Github]
GaussianDreamer: Fast Generation from Text to 3D Gaussian Splatting with Point Cloud Priors
Taoran Yi, Jiemin Fang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Qi Tian, Xinggang Wang
arXiv preprint, 12 Oct 2023
[arXiv] [Project] [Github]
CG3D: Compositional Generation for Text-to-3D via Gaussian Splatting
Alexander Vilesov, Pradyumna Chari, Achuta Kadambi
arXiv preprint, 29 Nov 2023
[arXiv]
Text2Immersion: Generative Immersive Scene with 3D Gaussians
Hao Ouyang, Kathryn Heal, Stephen Lombardi, Tiancheng Sun
arXiv preprint, 14 Dec 2023
[arXiv] [Project]
Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models
Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, Karsten Kreis
CVPR 2024, 21 Dec 2023
[arXiv] [Project]
4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency
Yuyang Yin, Dejia Xu, Zhangyang Wang, Yao Zhao, Yunchao Wei
arXiv preprint, 28 Dec 2023
[arXiv] [Project] [Code]
DreamGaussian4D: Generative 4D Gaussian Splatting
Jiawei Ren, Liang Pan, Jiaxiang Tang, Chi Zhang, Ang Cao, Gang Zeng, Ziwei Liu
arXiv preprint, 28 Dec 2023
[arXiv] [Project] [Code]
IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation
Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos
arXiv preprint, 13 Feb 2024
[arXiv]
GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting
Xiaoyu Zhou, Xingjian Ran, Yajiao Xiong, Jinlin He, Zhiwei Lin, Yongtao Wang, Deqing Sun, Ming-Hsuan Yang
arXiv preprint, 11 Feb 2024
[arXiv]
GVGEN: Text-to-3D Generation with Volumetric Representation
Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, Tong He
arXiv preprint, 19 Mar 2024
[arXiv] [Project]
BrightDreamer: Generic 3D Gaussian Generative Framework for Fast Text-to-3D Synthesis
Lutao Jiang, Lin Wang
arXiv preprint, 17 Mar 2024
[arXiv] [Project] [Code]
DreamPolisher: Towards High-Quality Text-to-3D Generation via Geometric Diffusion
Yuanze Lin, Ronald Clark, Philip Torr
arXiv preprint, 25 Mar 2024
[arXiv] [Project] [Code]
GaussianCube: Structuring Gaussian Splatting using Optimal Transport for 3D Generative Modeling
Bowen Zhang, Yiji Cheng, Jiaolong Yang, Chunyu Wang, Feng Zhao, Yansong Tang, Dong Chen, Baining Guo
arXiv preprint, 28 Mar 2024
[arXiv] [Project] [Code]
RealmDreamer: Text-Driven 3D Scene Generation with Inpainting and Depth Diffusion
Jaidev Shriram, Alex Trevithick, Lingjie Liu, Ravi Ramamoorthi
arXiv preprint, 10 Apr 2024
[arXiv] [Project]
DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting
Shijie Zhou, Zhiwen Fan, Dejia Xu, Haoran Chang, Pradyumna Chari, Tejas Bharadwaj, Suya You, Zhangyang Wang, Achuta Kadambi
arXiv preprint, 10 Apr 2024
[arXiv] [Project]
DreamScape: 3D Scene Creation via Gaussian Splatting joint Correlation Modeling
Xuening Yuan, Hongyu Yang, Yueming Zhao, Di Huang
arXiv preprint, 14 Apr 2024
[arXiv]
DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling
Haoran Li, Haolin Shi, Wenli Zhang, Wenjun Wu, Yong Liao, Lin Wang, Lik-hang Lee, Pengyuan Zhou
arXiv preprint, 4 Apr 2024
[arXiv]
Interactive3D: Create What You Want by Interactive 3D Generation
Shaocong Dong, Lihe Ding, Zhanpeng Huang, Zibin Wang, Tianfan Xue, Dan Xu
arXiv preprint, 25 Apr 2024
[arXiv] [Project] [Code]
FastScene: Text-Driven Fast 3D Indoor Scene Generation via Panoramic Gaussian Splatting
Yikun Ma, Dandan Zhan, Zhi Jin
IJCAI 2024, 9 May 2024
[arXiv]
MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes
Ruiyuan Gao, Kai Chen, Zhihao Li, Lanqing Hong, Zhenguo Li, Qiang Xu
arXiv preprint, 23 May 2024
[arXiv]
Dreamer XL: Towards High-Resolution Text-to-3D Generation via Trajectory Score Matching
Xingyu Miao, Haoran Duan, Varun Ojha, Jun Song, Tejal Shah, Yang Long, Rajiv Ranjan
arXiv preprint, 18 May 2024
[arXiv] [Code]
EG4D: Explicit Generation of 4D Object without Score Distillation
Qi Sun, Zhiyang Guo, Ziyu Wan, Jing Nathan Yan, Shengming Yin, Wengang Zhou, Jing Liao, Houqiang Li
arXiv preprint, 28 May 2024
[arXiv] [Code]
PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting
Qiaowei Miao, Yawei Luo, Yi Yang
arXiv preprint, 30 May 2024
[arXiv] [Project]
Adversarial Generation of Hierarchical Gaussians for 3D Generative Model
Sangeek Hyun, Jae-Pil Heo
arXiv preprint, 5 Jun 2024
[arXiv] [Project]
Physics3D: Learning Physical Properties of 3D Gaussians via Video Diffusion
Fangfu Liu, Hanyang Wang, Shunyu Yao, Shengjun Zhang, Jie Zhou, Yueqi Duan
arXiv preprint, 6 Jun 2024
[arXiv] [Project]
GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation
Haozhe Xie, Zhaoxi Chen, Fangzhou Hong, Ziwei Liu
arXiv preprint, 10 Jun 2024
[arXiv]
MVGamba: Unify 3D Content Generation as State Space Sequence Modeling
Xuanyu Yi, Zike Wu, Qiuhong Shen, Qingshan Xu, Pan Zhou, Joo-Hwee Lim, Shuicheng Yan, Xinchao Wang, Hanwang Zhang
arXiv preprint, 10 Jun 2024
[arXiv]
L4GM: Large 4D Gaussian Reconstruction Model
Jiawei Ren, Kevin Xie, Ashkan Mirzaei, Hanxue Liang, Xiaohui Zeng, Karsten Kreis, Ziwei Liu, Antonio Torralba, Sanja Fidler, Seung Wook Kim, Huan Ling
arXiv preprint, 14 Jun 2024
[arXiv] [Project]
GradeADreamer: Enhanced Text-to-3D Generation Using Gaussian Splatting and Multi-View Diffusion
Trapoom Ukarapol, Kevin Pruvost
arXiv preprint, 14 Jun 2024
[arXiv] [Code]
ClotheDreamer: Text-Guided Garment Generation with 3D Gaussians
Yufei Liu, Junshu Tang, Chu Zheng, Shijie Zhang, Jinkun Hao, Junwei Zhu, Dongjin Huang
arXiv preprint, 24 Jun 2024
[arXiv] [Project]
GaussianDreamerPro: Text to Manipulable 3D Gaussians with Highly Enhanced Quality
Taoran Yi, Jiemin Fang, Zanwei Zhou, Junjie Wang, Guanjun Wu, Lingxi Xie, Xiaopeng Zhang, Wenyu Liu, Xinggang Wang, Qi Tian
arXiv preprint, 26 Jun 2024
[arXiv] [Project] [Code]
TrAME: Trajectory-Anchored Multi-View Editing for Text-Guided 3D Gaussian Splatting Manipulation
Chaofan Luo, Donglin Di, Yongjia Ma, Zhou Xue, Chen Wei, Xun Yang, Yebin Liu
arXiv preprint, 2 Jul 2024
[arXiv]
HoloDreamer: Holistic 3D Panoramic World Generation from Text Descriptions
Haiyang Zhou, Xinhua Cheng, Wangbo Yu, Yonghong Tian, Li Yuan
arXiv preprint, 21 Jul 2024
[arXiv] [Project]
Connecting Consistency Distillation to Score Distillation for Text-to-3D Generation
Zongrui Li, Minghui Hu, Qian Zheng, Xudong Jiang
ECCV 2024, 18 Jul 2024
[arXiv] [Code]
SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency
Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, Varun Jampani
arXiv preprint, 24 Jul 2024
[arXiv] [Project] [Code]
DreamCouple: Exploring High Quality Text-to-3D Generation Via Rectified Flow
Hangyu Li, Xiangxiang Chu, Dingyuan Shi
arXiv preprint, 9 Aug 2024
[arXiv]
Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models
Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Chong-Wah Ngo, Tao Mei
ACMMM 2024, 11 Sep 2024
[arXiv] [Code]
DreamMapping: High-Fidelity Text-to-3D Generation via Variational Distribution Mapping
Zeyu Cai, Duotun Wang, Yixun Liang, Zhijing Shao, Ying-Cong Chen, Xiaohang Zhan, Zeyu Wang
arXiv preprint, 8 Sep 2024
[arXiv]
DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors
Thomas Hanwen Zhu, Ruining Li, Tomas Jakab
arXiv preprint, 12 Sep 2024
[arXiv]
DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation
Haibo Yang, Yang Chen, Yingwei Pan, Ting Yao, Zhineng Chen, Zuxuan Wu, Yu-Gang Jiang, Tao Mei
ECCV 2024, 11 Sep 2024
[arXiv] [Project]
DreamMesh4D: Video-to-4D Generation with Sparse-Controlled Gaussian-Mesh Hybrid Representation
Zhiqi Li, Yiming Chen, Peidong Liu
NeurIPS 2024, 9 Oct 2024
[arXiv]
RGM: Reconstructing High-fidelity 3D Car Assets with Relightable 3D-GS Generative Model from a Single Image
Xiaoxue Chen, Jv Zheng, Hao Huang, Haoran Xu, Weihao Gu, Kangliang Chen, He Xiang, Huan-ang Gao, Hao Zhao, Guyue Zhou, Yaqin Zhang
arXiv preprint, 10 Oct 2024
[arXiv]
DreamSat: Towards a General 3D Model for Novel View Synthesis of Space Objects
Nidhi Mathihalli, Audrey Wei, Giovanni Lavezzi, Peng Mun Siew, Victor Rodriguez-Fernandez, Hodei Urrutxua, Richard Linares
arXiv preprint, 7 Oct 2024
[arXiv] [Code]
Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors
Hritam Basak, Hadi Tabatabaee, Shreekant Gayaka, Ming-Feng Li, Xin Yang, Cheng-Hao Kuo, Arnie Sen, Min Sun, Zhaozheng Yin
arXiv preprint, 12 Oct 2024
[arXiv]
3D-Adapter: Geometry-Consistent Multi-View Diffusion for High-Quality 3D Generation
Hansheng Chen, Bokui Shen, Yulin Liu, Ruoxi Shi, Linqi Zhou, Connor Z. Lin, Jiayuan Gu, Hao Su, Gordon Wetzstein, Leonidas Guibas
arXiv preprint, 24 Oct 2024
[arXiv] [Project] [Code]
CompGS: Unleashing 2D Compositionality for Compositional Text-to-3D via Dynamically Optimizing 3D Gaussians
Chongjian Ge, Chenfeng Xu, Yuanfeng Ji, Chensheng Peng, Masayoshi Tomizuka, Ping Luo, Mingyu Ding, Varun Jampani, Wei Zhan
arXiv preprint, 28 Oct 2024
[arXiv] [Project]
DiffGS: Functional Gaussian Splatting Diffusion
Junsheng Zhou, Weiqi Zhang, Yu-Shen Liu
NeurIPS 2024, 25 Oct 2024
[arXiv] [Project] [Code]
Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation
Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, Yuqian Zhou, Zhe Lin, Alan Yuille
arXiv preprint, 21 Nov 2024
[arXiv] [Project]
Direct and Explicit 3D Generation from a Single Image
Haoyu Wu, Meher Gitika Karumuri, Chuhang Zou, Seungbae Bang, Yuelong Li, Dimitris Samaras, Sunil Hadap
3DV 2025, 17 Nov 2024
[arXiv] [Project] [Video]
PhyCAGE: Physically Plausible Compositional 3D Asset Generation from a Single Image
Han Yan, Mingrui Zhang, Yang Li, Chao Ma, Pan Ji
27 Nov 2024
[arXiv] [Project] [Video]
Turbo3D: Ultra-fast Text-to-3D Generation
Hanzhe Hu, Tianwei Yin, Fujun Luan, Yiwei Hu, Hao Tan, Zexiang Xu, Sai Bi, Shubham Tulsiani, Kai Zhang
5 Dec 2024
[arXiv] [Project]
Text-to-3D Gaussian Splatting with Physics-Grounded Motion Generation
Wenqing Wang, Yun Fu
7 Dec 2024
[arXiv]
DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models
Kevin Miao, Harsh Agrawal, Qihang Zhang, Federico Semeraro, Marco Cavallo, Jiatao Gu, Alexander Toshev
11 Dec 2024
[arXiv]
Interactive Scene Authoring with Specialized Generative Primitives
Clément Jambon, Changwoon Choi, Dongsu Zhang, Olga Sorkine-Hornung, Young Min Kim
20 Dec 2024
[arXiv]
3DGS.zip: A survey on 3D Gaussian Splatting Compression Methods
Milena T. Bagdasarian, Paul Knoll, Florian Barthel, Anna Hilsmann, Peter Eisert, Wieland Morgenstern
arXiv preprint, 17 Jun 2024
[arXiv] [Project]
LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS
Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, Zhangyang Wang
arXiv preprint, 28 Nov 2023
[arXiv] [Project] [Video]
Identifying Unnecessary 3D Gaussians using Clustering for Fast Rendering of 3D Gaussian Splatting
Joongho Jo, Hyeongwon Kim, Jongsun Park
arXiv preprint, 21 Feb 2024
[arXiv]
🔥HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression
Yihang Chen, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, Jianfei Cai
ECCV 2024, 21 Mar 2024
Abstract
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel view synthesis, boasting rapid rendering speed with high fidelity. However, the substantial Gaussians and their associated attributes necessitate effective compression techniques. Nevertheless, the sparse and unorganized nature of the point cloud of Gaussians (or anchors in our paper) presents challenges for compression. To address this, we make use of the relations between the unorganized anchors and the structured hash grid, leveraging their mutual information for context modeling, and propose a Hash-grid Assisted Context (HAC) framework for highly compact 3DGS representation. Our approach introduces a binary hash grid to establish continuous spatial consistencies, allowing us to unveil the inherent spatial relations of anchors through a carefully designed context model. To facilitate entropy coding, we utilize Gaussian distributions to accurately estimate the probability of each quantized attribute, where an adaptive quantization module is proposed to enable high-precision quantization of these attributes for improved fidelity restoration. Additionally, we incorporate an adaptive masking strategy to eliminate invalid Gaussians and anchors. Importantly, our work is the pioneer to explore context-based compression for 3DGS representation, resulting in a remarkable size reduction of over 75× compared to vanilla 3DGS, while simultaneously improving fidelity, and achieving over 11× size reduction over SOTA 3DGS compression approach Scaffold-GS. Our code is available here: this https URL
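To make the context-modeling idea concrete, here is a minimal PyTorch sketch of the pipeline the abstract describes: features interpolated from a learnable grid predict a Gaussian distribution per quantized anchor attribute, and the probability mass of each quantization bin gives the entropy-coding cost in bits. This is our own illustration, not the authors' code; the dense grid standing in for the binary hash grid, the layer sizes, and the fake anchor data are all assumptions.

```python
# Illustrative sketch of hash-grid-assisted context modeling (not the HAC code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGridContext(nn.Module):
    """Predict a Gaussian (mu, sigma) per attribute channel from grid features."""
    def __init__(self, res=32, feat_dim=16, attr_dim=8):
        super().__init__()
        # Dense learnable feature volume standing in for the binary hash grid.
        self.grid = nn.Parameter(0.01 * torch.randn(1, feat_dim, res, res, res))
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * attr_dim))

    def forward(self, anchors):                       # anchors: (N, 3) in [-1, 1]
        pts = anchors.view(1, -1, 1, 1, 3)            # grid_sample expects (1, D, H, W, 3)
        feat = F.grid_sample(self.grid, pts, align_corners=True)
        feat = feat.view(self.grid.shape[1], -1).t()  # -> (N, feat_dim)
        mu, log_sigma = self.mlp(feat).chunk(2, dim=-1)
        return mu, log_sigma.exp()

def estimated_bits(q_attr, mu, sigma, step=1.0):
    """Bits to entropy-code quantized attributes under the predicted Gaussians."""
    normal = torch.distributions.Normal(mu, sigma)
    p_bin = normal.cdf(q_attr + step / 2) - normal.cdf(q_attr - step / 2)
    return -torch.log2(p_bin.clamp_min(1e-9)).sum()

ctx = ToyGridContext()
anchors = 2 * torch.rand(1024, 3) - 1                 # fake anchor positions
q_attr = torch.round(torch.randn(1024, 8))            # fake quantized attributes
mu, sigma = ctx(anchors)
print(f"estimated payload: {estimated_bits(q_attr, mu, sigma).item() / 8:.0f} bytes")
```

Minimizing such a bit estimate jointly with the rendering loss is what lets a context model trade rate against fidelity.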
CompGS: Efficient 3D Scene Representation via Compressed Gaussian Splatting
Xiangrui Liu, Xinju Wu, Pingping Zhang, Shiqi Wang, Zhu Li, Sam Kwong
arXiv preprint, 15 Apr 2024
[arXiv]
F-3DGS: Factorized Coordinates and Representations for 3D Gaussian Splatting
Xiangyu Sun, Joo Chan Lee, Daniel Rho, Jong Hwan Ko, Usman Ali, Eunbyung Park
arXiv preprint, 27 May 2024
[arXiv] [Project] [Code]
LP-3DGS: Learning to Prune 3D Gaussian Splatting
Zhaoliang Zhang, Tianchen Song, Yongjae Lee, Li Yang, Cheng Peng, Rama Chellappa, Deliang Fan
arXiv preprint, 29 May 2024
[arXiv]
ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model
Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex C. Kot, Bihan Wen
arXiv preprint, 31 May 2024
[arXiv]
Gaussian-Forest: Hierarchical-Hybrid 3D Gaussian Splatting for Compressed Scene Modeling
Fengyi Zhang, Tianjun Zhang, Lin Zhang, Helen Huang, Yadan Luo
arXiv preprint, 13 Jun 2024
[arXiv]
🔥Reducing the Memory Footprint of 3D Gaussian Splatting
Panagiotis Papantonakis, Georgios Kopanas, Bernhard Kerbl, Alexandre Lanvin, George Drettakis
arXiv preprint, 24 Jun 2024
Abstract
3D Gaussian splatting provides excellent visual quality for novel view synthesis, with fast training and real-time rendering; unfortunately, the memory requirements of this method for storing and transmission are unreasonably high. We first analyze the reasons for this, identifying three main areas where storage can be reduced: the number of 3D Gaussian primitives used to represent a scene, the number of coefficients for the spherical harmonics used to represent directional radiance, and the precision required to store Gaussian primitive attributes. We present a solution to each of these issues. First, we propose an efficient, resolution-aware primitive pruning approach, reducing the primitive count by half. Second, we introduce an adaptive adjustment method to choose the number of coefficients used to represent directional radiance for each Gaussian primitive, and finally a codebook-based quantization method, together with a half-float representation for further memory reduction. Taken together, these three components result in a 27× reduction in overall size on disk on the standard datasets we tested, along with a 1.7× speedup in rendering speed. We demonstrate our method on standard datasets and show how our solution results in significantly reduced download times when using the method on a mobile device.
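Two of the three ingredients above, codebook quantization and half-float storage, can be illustrated in a few lines of NumPy. This is a toy sketch under made-up sizes, not the paper's implementation: per-Gaussian spherical-harmonics vectors are clustered into a small codebook, and only the float16 codebook plus one index per Gaussian is stored.

```python
# Toy codebook quantization + half-float storage for SH coefficients.
import numpy as np

rng = np.random.default_rng(0)
sh = rng.standard_normal((20_000, 45)).astype(np.float32)  # fake per-Gaussian SH attrs

def kmeans(x, k=256, iters=10):
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Squared distances via ||x||^2 - 2 x.c + ||c||^2 avoid a huge broadcast.
        d = (x * x).sum(1, keepdims=True) - 2 * x @ centers.T + (centers * centers).sum(1)
        labels = d.argmin(1)
        for j in range(k):                     # recompute each non-empty center
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

centers, labels = kmeans(sh)
codebook = centers.astype(np.float16)          # half-float codebook
indices = labels.astype(np.uint8)              # 1 byte per Gaussian for k = 256

raw, packed = sh.nbytes, codebook.nbytes + indices.nbytes
print(f"{raw / 1e6:.2f} MB -> {packed / 1e6:.3f} MB ({raw / packed:.0f}x smaller)")
```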
Lightweight Predictive 3D Gaussian Splats
Junli Cao, Vidit Goel, Chaoyang Wang, Anil Kag, Ju Hu, Sergei Korolev, Chenfanfu Jiang, Sergey Tulyakov, Jian Ren
arXiv preprint, 27 Jun 2024
[arXiv] [Project]
Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning
Muhammad Salman Ali, Maryam Qamar, Sung-Ho Bae, Enzo Tartaglione
arXiv preprint, 26 Jun 2024
[arXiv]
A Benchmark for Gaussian Splatting Compression and Quality Assessment Study
Qi Yang, Kaifa Yang, Yuke Xing, Yiling Xu, Zhu Li
arXiv preprint, 19 Jul 2024
[arXiv] [Code]
Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields
Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, Eunbyung Park
arXiv preprint, 7 Aug 2024
[arXiv]
MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation
Shuzhao Xie, Weixiang Zhang, Chen Tang, Yunpeng Bai, Rongwei Lu, Shijia Ge, Zhi Wang
ECCV 2024, 15 Sep 2024
[arXiv]
Fast Feedforward 3D Gaussian Splatting Compression
Yihang Chen, Qianyi Wu, Mengyao Li, Weiyao Lin, Mehrtash Harandi, Jianfei Cai
arXiv preprint, 10 Oct 2024
[arXiv] [Project] [Code]
ELMGS: Enhancing memory and computation scaLability through coMpression for 3D Gaussian Splatting
Muhammad Salman Ali, Sung-Ho Bae, Enzo Tartaglione
arXiv preprint, 30 Oct 2024
[arXiv]
A Hierarchical Compression Technique for 3D Gaussian Splatting Compression
He Huang, Wenjie Huang, Qi Yang, Yiling Xu, Zhu Li
arXiv preprint, 11 Nov 2024
[arXiv]
HEMGS: A Hybrid Entropy Model for 3D Gaussian Splatting Data Compression
Lei Liu, Zhenghao Chen, Dong Xu
27 Nov 2024
[arXiv]
Temporally Compressed 3D Gaussian Splatting for Dynamic Scenes
Saqib Javed, Ahmad Jarrar Khan, Corentin Dumery, Chen Zhao, Mathieu Salzmann
7 Dec 2024
[arXiv]
3DGStream: On-the-Fly Training of 3D Gaussians for Efficient Streaming of Photo-Realistic Free-Viewpoint Videos
Jiakai Sun, Han Jiao, Guangyuan Li, Zhanjie Zhang, Lei Zhao, Wei Xing
CVPR 2024, 3 Mar 2024
[arXiv] [Project] [Code]
HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression
Yihang Chen, Qianyi Wu, Jianfei Cai, Mehrtash Harandi, Weiyao Lin
arXiv preprint, 12 Mar 2024
[arXiv] [Project] [Code]
LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming
Yuang Shi, Simone Gasparini, Géraldine Morin, Wei Tsang Ooi
arXiv preprint, 27 Aug 2024
[arXiv]
PRoGS: Progressive Rendering of Gaussian Splats
Brent Zoomers, Maarten Wijnants, Ivan Molenaers, Joni Vanherck, Jeroen Put, Lode Jorissen, Nick Michiels
arXiv preprint, 3 Sep 2024
[arXiv]
SwinGS: Sliding Window Gaussian Splatting for Volumetric Video Streaming with Arbitrary Length
Bangya Liu, Suman Banerjee
[arXiv]
QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos
Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello
NeurIPS 2024, 5 Dec 2024
[arXiv] [Project]
Subsurface Scattering for 3D Gaussian Splatting
Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P.A. Lensch
arXiv preprint, 22 Aug 2024
[arXiv] [Project] [Code]
3D Gaussian Splatting in Robotics: A Survey
Siting Zhu, Guangming Wang, Dezhi Kong, Hesheng Wang
arXiv preprint, 16 Oct 2024
[arXiv]
Neural Fields in Robotics: A Survey
Muhammad Zubair Irshad, Mauro Comi, Yen-Chen Lin, Nick Heppert, Abhinav Valada, Rares Ambrus, Zsolt Kira, Jonathan Tremblay
arXiv preprint, 26 Oct 2024
[arXiv] [Project]
Splat-Nav: Safe Real-Time Robot Navigation in Gaussian Splatting Maps
Timothy Chen, Ola Shorinwa, Weijia Zeng, Joseph Bruno, Philip Dames, Mac Schwager
arXiv preprint, 5 Mar 2024
[arXiv]
ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation
Guanxing Lu, Shiyi Zhang, Ziwei Wang, Changliu Liu, Jiwen Lu, Yansong Tang
arXiv preprint, 13 Mar 2024
[arXiv]
Splat-MOVER: Multi-Stage, Open-Vocabulary Robotic Manipulation via Editable Gaussian Splatting
Ola Shorinwa, Johnathan Tucker, Aliyah Smith, Aiden Swann, Timothy Chen, Roya Firoozi, Monroe Kennedy III, Mac Schwager
arXiv preprint, 7 Mar 2024
[arXiv]
Query-based Semantic Gaussian Field for Scene Representation in Reinforcement Learning
Jiaxu Wang, Ziyi Zhang, Qiang Zhang, Jia Li, Jingkai Sun, Mingyuan Sun, Junhao He, Renjing Xu
arXiv preprint, 4 Jun 2024
[arXiv]
Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks
Alex Quach, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus
arXiv preprint, 21 Jun 2024
[arXiv]
Robo-GS: A Physics Consistent Spatial-Temporal Model for Robotic Arm with Hybrid Representation
Haozhe Lou, Yurong Liu, Yike Pan, Yiran Geng, Jianteng Chen, Wenlong Ma, Chenglong Li, Lin Wang, Hengzhen Feng, Lu Shi, Liyi Luo, Yongliang Shi
arXiv preprint, 27 Aug 2024
[arXiv] [Project] [Video]
GaussianPU: A Hybrid 2D-3D Upsampling Framework for Enhancing Color Point Clouds via 3D Gaussian Splatting
Zixuan Guo, Yifan Xie, Weijing Xie, Peng Huang, Fei Ma, Fei Richard Yu
arXiv preprint, 3 Sep 2024
[arXiv]
GraspSplats: Efficient Manipulation with 3D Feature Splatting
Mazeyu Ji, Ri-Zhao Qiu, Xueyan Zou, Xiaolong Wang
arXiv preprint, 3 Sep 2024
[arXiv] [Project] [Video] [Code]
SplatSim: Zero-Shot Sim2Real Transfer of RGB Manipulation Policies Using Gaussian Splatting
Mohammad Nomaan Qureshi, Sparsh Garg, Francisco Yandun, David Held, George Kantor, Abhishesh Silwal
arXiv preprint, 16 Sep 2024
[arXiv]
BEINGS: Bayesian Embodied Image-goal Navigation with Gaussian Splatting
Wugang Meng, Tianfu Wu, Huan Yin, Fumin Zhang
arXiv preprint, 16 Sep 2024
[arXiv]
SAFER-Splat: A Control Barrier Function for Safe Navigation with Online Gaussian Splatting Maps
Timothy Chen, Aiden Swann, Javier Yu, Ola Shorinwa, Riku Murai, Monroe Kennedy III, Mac Schwager
arXiv preprint, 15 Sep 2024
[arXiv] [Project]
RT-GuIDE: Real-Time Gaussian splatting for Information-Driven Exploration
Yuezhan Tao, Dexter Ong, Varun Murali, Igor Spasojevic, Pratik Chaudhari, Vijay Kumar
arXiv preprint, 26 Sep 2024
[arXiv] [Project] [Video]
Language-Embedded Gaussian Splats (LEGS): Incrementally Building Room-Scale Representations with a Mobile Robot
Justin Yu, Kush Hari, Kishore Srinivas, Karim El-Refai, Adam Rashid, Chung Min Kim, Justin Kerr, Richard Cheng, Muhammad Zubair Irshad, Ashwin Balakrishna, Thomas Kollar, Ken Goldberg
arXiv preprint, 26 Sep 2024
[arXiv]
HGS-Planner: Hierarchical Planning Framework for Active Scene Reconstruction Using 3D Gaussian Splatting
Zijun Xu, Rui Jin, Ke Wu, Yi Zhao, Zhiwei Zhang, Jieru Zhao, Zhongxue Gan, Wenchao Ding
arXiv preprint, 26 Sep 2024
[arXiv]
Let's Make a Splan: Risk-Aware Trajectory Optimization in a Normalized Gaussian Splat
Jonathan Michaux, Seth Isaacson, Challen Enninful Adu, Adam Li, Rahul Kashyap Swayampakula, Parker Ewen, Sean Rice, Katherine A. Skinner, Ram Vasudevan
arXiv preprint, 26 Sep 2024
[arXiv]
RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning
Yuxuan Wu, Lei Pan, Wenhua Wu, Guangming Wang, Yanzi Miao, Hesheng Wang
arXiv preprint, 30 Sep 2024
[arXiv]
SplaTraj: Camera Trajectory Generation with Semantic Gaussian Splatting
Xinyi Liu, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi
arXiv preprint, 8 Oct 2024
[arXiv]
Next Best Sense: Guiding Vision and Touch with FisherRF for 3D Gaussian Splatting
Matthew Strong, Boshu Lei, Aiden Swann, Wen Jiang, Kostas Daniilidis, Monroe Kennedy III
arXiv preprint, 7 Oct 2024
[arXiv] [Project] [Video] [Code]
Mode-GS: Monocular Depth Guided Anchored 3D Gaussian Splatting for Robust Ground-View Scene Rendering
Yonghan Lee, Jaehoon Choi, Dongki Jung, Jaeseong Yun, Soohyun Ryu, Dinesh Manocha, Suyong Yeon
arXiv preprint, 6 Oct 2024
[arXiv]
PhotoReg: Photometrically Registering 3D Gaussian Splatting Models
Ziwen Yuan, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi
arXiv preprint, 7 Oct 2024
[arXiv] [Project] [Video] [Code]
L-VITeX: Light-weight Visual Intuition for Terrain Exploration
Antar Mazumder, Zarin Anjum Madhiha
arXiv preprint, 10 Oct 2024
[arXiv]
Gaussian Splatting Visual MPC for Granular Media Manipulation
Wei-Cheng Tseng, Ellina Zhang, Krishna Murthy Jatavallabhula, Florian Shkurti
arXiv preprint, 13 Oct 2024
[arXiv] [Project]
Differentiable Robot Rendering
Ruoshi Liu, Alper Canberk, Shuran Song, Carl Vondrick
arXiv preprint, 17 Oct 2024
[arXiv] [Project] [Video] [Code]
MSGField: A Unified Scene Representation Integrating Motion, Semantics, and Geometry for Robotic Manipulation
Yu Sheng, Runfeng Lin, Lidian Wang, Quecheng Qiu, YanYong Zhang, Yu Zhang, Bei Hua, Jianmin Ji
arXiv preprint, 21 Oct 2024
[arXiv] [Project] [Code]
Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
Mingtong Zhang, Kaifeng Zhang, Yunzhu Li
arXiv preprint, 24 Oct 2024
[arXiv] [Project] [Code]
E-3DGS: Gaussian Splatting with Exposure and Motion Events
Xiaoting Yin, Hao Shi, Yuhan Bao, Zhenshan Bing, Yiyi Liao, Kailun Yang, Kaiwei Wang
arXiv preprint, 22 Oct 2024
[arXiv] [Code]
ActiveSplat: High-Fidelity Scene Reconstruction through Active Gaussian Splatting
Yuetao Li, Zijia Kuang, Ting Li, Guyue Zhou, Shaohui Zhang, Zike Yan
arXiv preprint, 29 Oct 2024
[arXiv] [Project] [Video]
Get a Grip: Multi-Finger Grasp Evaluation at Scale Enables Robust Sim-to-Real Transfer
Tyler Ga Wei Lum, Albert H. Li, Preston Culbertson, Krishnan Srinivasan, Aaron D. Ames, Mac Schwager, Jeannette Bohg
arXiv preprint, 31 Oct 2024
[arXiv] [Project] [Video]
3DGS-CD: 3D Gaussian Splatting-based Change Detection for Physical Object Rearrangement
Ziqi Lu, Jianbo Ye, John Leonard
arXiv preprint, 6 Nov 2024
[arXiv] [Code]
Object and Contact Point Tracking in Demonstrations Using 3D Gaussian Splatting
Michael Büttner, Jonathan Francis, Helge Rhodin, Andrew Melnik
CoRL 2024, 5 Nov 2024
[arXiv]
Modeling Uncertainty in 3D Gaussian Splatting through Continuous Semantic Splatting
Joey Wilson, Marcelino Almeida, Min Sun, Sachit Mahajan, Maani Ghaffari, Parker Ewen, Omid Ghasemalizadeh, Cheng-Hao Kuo, Arnie Sen
arXiv preprint, 4 Nov 2024
[arXiv]
Through the Curved Cover: Synthesizing Cover Aberrated Scenes with Refractive Field
Liuyue Xie, Jiancong Guo, Laszlo A. Jeni, Zhiheng Jia, Mingyang Li, Yunwen Zhou, Chao Guo
WACV 2025, 10 Nov 2024
[arXiv]
SplatR: Experience Goal Visual Rearrangement with 3D Gaussian Splatting and Dense Feature Matching
Arjun P S, Andrew Melnik, Gora Chand Nandi
arXiv preprint, 21 Nov 2024
[arXiv]
RoboGSim: A Real2Sim2Real Robotic Gaussian Splatting Simulator
Xinhai Li, Jialin Li, Ziheng Zhang, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Kuo-Kun Tseng, Ruiping Wang
18 Nov 2024
[arXiv] [Project]
Multi-robot autonomous 3D reconstruction using Gaussian splatting with Semantic guidance
Jing Zeng, Qi Ye, Tianle Liu, Yang Xu, Jin Li, Jinming Xu, Liang Li, Jiming Chen
3 Dec 2024
[arXiv]
SparseGrasp: Robotic Grasping via 3D Semantic Gaussian Splatting from Sparse Multi-View RGB Images
Junqiu Yu, Xinlin Ren, Yongchong Gu, Haitao Lin, Tianyu Wang, Yi Zhu, Hang Xu, Yu-Gang Jiang, Xiangyang Xue, Yanwei Fu
3 Dec 2024
[arXiv]
ActiveGS: Active Scene Reconstruction using Gaussian Splatting
Liren Jin, Xingguang Zhong, Yue Pan, Jens Behley, Cyrill Stachniss, Marija Popović
23 Dec 2024
[arXiv]
Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination
Leonardo Barcellona, Andrii Zadaianchuk, Davide Allegro, Samuele Papa, Stefano Ghidoni, Efstratios Gavves
19 Dec 2024
[arXiv] [Project]
A Survey on 3D Human Avatar Modeling -- From Reconstruction to Generation
Ruihe Wang, Yukang Cao, Kai Han, Kwan-Yee K. Wong
arXiv preprint, 6 Jun 2024
[arXiv]
Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars
Yang Liu, Xiang Huang, Minghan Qin, Qinwei Lin, Haoqian Wang
arXiv preprint, 27 Nov 2023
[arXiv] [Project]
HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting
Xian Liu, Xiaohang Zhan, Jiaxiang Tang, Ying Shan, Gang Zeng, Dahua Lin, Xihui Liu, Ziwei Liu
arXiv preprint, 28 Nov 2023
[arXiv] [Project]
HUGS: Human Gaussian Splats
Muhammed Kocabas, Jen-Hao Rick Chang, James Gabriel, Oncel Tuzel, Anurag Ranjan
arXiv preprint, 29 Nov 2023
[arXiv]
Gaussian Shell Maps for Efficient 3D Human Generation
Rameen Abdal, Wang Yifan, Zifan Shi, Yinghao Xu, Ryan Po, Zhengfei Kuang, Qifeng Chen, Dit-Yan Yeung, Gordon Wetzstein
arXiv preprint, 29 Nov 2023
[arXiv]
GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis
Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
arXiv preprint, 4 Dec 2023
[arXiv] [Project]
GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
Liangxiao Hu, Hongwen Zhang, Yuxiang Zhang, Boyao Zhou, Boning Liu, Shengping Zhang, Liqiang Nie
arXiv preprint, 4 Dec 2023
[arXiv] [Project]
GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians
Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, Matthias Nießner
arXiv preprint, 4 Dec 2023
[arXiv] [Project]
GaussianHead: Impressive 3D Gaussian-based Head Avatars with Dynamic Hybrid Neural Field
Jie Wang, Xianyan Li, Jiucheng Xie, Feng Xu, Hao Gao
arXiv preprint, 4 Dec 2023
[arXiv]
GauHuman: Articulated Gaussian Splatting from Monocular Human Videos
Shoukang Hu, Ziwei Liu
CVPR 2024, 5 Dec 2023
[arXiv] [Project] [Code]
HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting
Helisa Dhamo, Yinyu Nie, Arthur Moreau, Jifei Song, Richard Shaw, Yiren Zhou, Eduardo Pérez-Pellitero
arXiv preprint, 5 Dec 2023
[arXiv]
Relightable Gaussian Codec Avatars
Shunsuke Saito, Gabriel Schwartz, Tomas Simon, Junxuan Li, Giljoo Nam
CVPR 2024, 5 Dec 2023
[arXiv] [Project]
HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting
Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, Lan Xu
arXiv preprint, 6 Dec 2023
[arXiv]
MonoGaussianAvatar: Monocular Gaussian Point-based Head Avatar
Yufan Chen, Lizhen Wang, Qijing Li, Hongjiang Xiao, Shengping Zhang, Hongxun Yao, Yebin Liu
SIGGRAPH 2024, 7 Dec 2023
[arXiv] [Project] [Code] [Video]
ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering
Haokai Pang, Heming Zhu, Adam Kortylewski, Christian Theobalt, Marc Habermann
arXiv preprint, 10 Dec 2023
[arXiv] [Project] [Code]
3DGS-Avatar: Animatable Avatars via Deformable 3D Gaussian Splatting
Zhiyin Qian, Shaofei Wang, Marko Mihajlovic, Andreas Geiger, Siyu Tang
CVPR 2024, 14 Dec 2023
[arXiv] [Project] [Code]
GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning
Ye Yuan, Xueting Li, Yangyi Huang, Shalini De Mello, Koki Nagano, Jan Kautz, Umar Iqbal
CVPR 2024, 18 Dec 2023
[arXiv] [Project] [Video]
3D Points Splatting for Real-Time Dynamic Hand Reconstruction
Zheheng Jiang, Hossein Rahmani, Sue Black, Bryan M. Williams
arXiv preprint, 21 Dec 2023
[arXiv]
Human101: Training 100+FPS Human Gaussians in 100s from 1 View
Mingwei Li, Jiachen Tao, Zongxin Yang, Yi Yang
arXiv preprint, 23 Dec 2023
[arXiv] [Code]
Gaussian Shadow Casting for Neural Characters
Luis Bolanos, Shih-Yang Su, Helge Rhodin
arXiv preprint, 11 Jan 2024
[arXiv]
GPAvatar: Generalizable and Precise Head Avatar from Image(s)
Xuangeng Chu, Yu Li, Ailing Zeng, Tianyu Yang, Lijian Lin, Yunfei Liu, Tatsuya Harada
ICLR 2024, 18 Jan 2024
[arXiv] [Code]
PSAvatar: A Point-based Morphable Shape Model for Real-Time Head Avatar Creation with 3D Gaussian Splatting
Zhongyuan Zhao, Zhenyu Bao, Qing Li, Guoping Qiu, Kanglin Liu
arXiv preprint, 23 Jan 2024
[arXiv]
ImplicitDeepfake: Plausible Face-Swapping through Implicit Deepfake Generation using NeRF and Gaussian Splatting
Georgii Stanishevskii, Jakub Steczkiewicz, Tomasz Szczepanik, Sławomir Tadeja, Jacek Tabor, Przemysław Spurek
arXiv preprint, 9 Feb 2024
[arXiv]
GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians
Haimin Luo, Min Ouyang, Zijun Zhao, Suyi Jiang, Longwen Zhang, Qixuan Zhang, Wei Yang, Lan Xu, Jingyi Yu
arXiv preprint, 16 Feb 2024
[arXiv]
HeadStudio: Text to Animatable Head Avatars with 3D Gaussian Splatting
Zhenglin Zhou, Fan Ma, Hehe Fan, Yi Yang
arXiv preprint, 9 Feb 2024
[arXiv]
Rig3DGS: Creating Controllable Portraits from Casual Monocular Videos
Alfredo Rivero, ShahRukh Athar, Zhixin Shu, Dimitris Samaras
arXiv preprint, 6 Feb 2024
[arXiv] [Project]
SplatFace: Gaussian Splat Face Reconstruction Leveraging an Optimizable Surface
Jiahao Luo, Jing Liu, James Davis
arXiv preprint, 27 Mar 2024
[arXiv]
HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior
David Svitov, Pietro Morerio, Lourdes Agapito, Alessio Del Bue
arXiv preprint, 1 Apr 2024
[arXiv]
GoMAvatar: Efficient Animatable Human Modeling from Monocular Video Using Gaussians-on-Mesh
Jing Wen, Xiaoming Zhao, Zhongzheng Ren, Alexander G. Schwing, Shenlong Wang
CVPR 2024, 11 Apr 2024
[arXiv] [Project] [Code]
OccGaussian: 3D Gaussian Splatting for Occluded Human Rendering
Jingrui Ye, Zongkai Zhang, Yujiao Jiang, Qingmin Liao, Wenming Yang, Zongqing Lu
arXiv preprint, 12 Apr 2024
[arXiv]
GaussianTalker: Speaker-specific Talking Head Synthesis via 3D Gaussian Splatting
Hongyun Yu, Zhan Qu, Qihang Yu, Jianchuan Chen, Zhonghua Jiang, Zhiwen Chen, Shengyu Zhang, Jimin Xu, Fei Wu, Chengfei Lv, Gang Yu
arXiv preprint, 22 Apr 2024
[arXiv] [Project] [Video]
3D Gaussian Blendshapes for Head Avatar Animation
Shengjie Ma, Yanlin Weng, Tianjia Shao, Kun Zhou
SIGGRAPH 2024, 30 Apr 2024
[arXiv]
GaussianTalker: Real-Time High-Fidelity Talking Head Synthesis with Audio-Driven 3D Gaussian Splatting
Kyusun Cho, Joungbin Lee, Heeji Yoon, Yeobin Hong, Jaehoon Ko, Sangjun Ahn, Seungryong Kim
arXiv preprint, 24 Apr 2024
[arXiv] [Code]
GSTalker: Real-time Audio-Driven Talking Face Generation via Deformable Gaussian Splatting
Bo Chen, Shoukang Hu, Qi Chen, Chenpeng Du, Ran Yi, Yanmin Qian, Xie Chen
arXiv preprint, 29 Apr 2024
[arXiv]
MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing
Cong Wang, Di Kang, He-Yi Sun, Shen-Han Qian, Zi-Xuan Wang, Linchao Bao, Song-Hai Zhang
arXiv preprint, 29 Apr 2024
[arXiv] [Project]
Tele-Aloha: A Low-budget and High-authenticity Telepresence System Using Sparse RGB Cameras
Hanzhang Tu, Ruizhi Shao, Xue Dong, Shunyuan Zheng, Hao Zhang, Lili Chen, Meili Wang, Wenyu Li, Siyan Ma, Shengping Zhang, Boyao Zhou, Yebin Liu
SIGGRAPH 2024, 23 May 2024
[arXiv] [Project] [Video]
LAGA: Layered 3D Avatar Generation and Customization via Gaussian Splatting
Jia Gong, Shenyu Ji, Lin Geng Foo, Kang Chen, Hossein Rahmani, Jun Liu
arXiv preprint, 21 May 2024
[arXiv]
Gaussian Control with Hierarchical Semantic Graphs in 3D Human Recovery
Hongsheng Wang, Weiyue Zhang, Sihao Liu, Xinrui Zhou, Shengyu Zhang, Fei Wu, Feng Lin
arXiv preprint, 21 May 2024
[arXiv] [Project] [Code]
Gaussian Head & Shoulders: High Fidelity Neural Upper Body Avatars with Anchor Gaussian Guided Texture Warping
Tianhao Wu, Jing Yang, Zhilin Guo, Jingyi Wan, Fangcheng Zhong, Cengiz Oztireli
arXiv preprint, 20 May 2024
[arXiv] [Project]
FAGhead: Fully Animate Gaussian Head from Monocular Videos
Yixin Xuan, Xinyang Li, Gongxin Yao, Shiwei Zhou, Donghui Sun, Xiaoxin Chen, Yu Pan
arXiv preprint, 27 Jun 2024
[arXiv]
Expressive Gaussian Human Avatars from Monocular RGB Video
Hezhen Hu, Zhiwen Fan, Tianhao Wu, Yihan Xi, Seoyoung Lee, Georgios Pavlakos, Zhangyang Wang
arXiv preprint, 3 Jul 2024
[arXiv] [Project]
MeshAvatar: Learning High-quality Triangular Human Avatars from Multi-view Videos
Yushuo Chen, Zerong Zheng, Zhe Li, Chao Xu, Yebin Liu
arXiv preprint, 11 Jul 2024
[arXiv] [Project] [Code]
Interactive Rendering of Relightable and Animatable Gaussian Avatars
Youyi Zhan, Tianjia Shao, He Wang, Yin Yang, Kun Zhou
arXiv preprint, 15 Jul 2024
[arXiv]
Generalizable Human Gaussians for Sparse View Synthesis
Youngjoong Kwon, Baole Fang, Yixing Lu, Haoye Dong, Cheng Zhang, Francisco Vicente Carrasco, Albert Mosella-Montoro, Jianjin Xu, Shingo Takagi, Daeil Kim, Aayush Prakash, Fernando De la Torre
arXiv preprint, 17 Jul 2024
[arXiv]
iHuman: Instant Animatable Digital Humans From Monocular Videos
Pramish Paudel, Anubhav Khanal, Ajad Chhatkuli, Danda Pani Paudel, Jyoti Tandukar
ECCV 2024, 15 Jul 2024
[arXiv]
HeadGAP: Few-shot 3D Head Avatar via Generalizable Gaussian Priors
Xiaozheng Zheng, Chao Wen, Zhaohu Li, Weiyi Zhang, Zhuo Su, Xu Chang, Yang Zhao, Zheng Lv, Xiaoyuan Zhang, Yongjie Zhang, Guidong Wang, Lan Xu
arXiv preprint, 12 Aug 2024
[arXiv] [Project]
DEGAS: Detailed Expressions on Full-Body Gaussian Avatars
Zhijing Shao, Duotun Wang, Qing-Yao Tian, Yao-Dong Yang, Hengyu Meng, Zeyu Cai, Bo Dong, Yu Zhang, Kang Zhang, Zeyu Wang
arXiv preprint, 20 Aug 2024
[arXiv]
SG-GS: Photo-realistic Animatable Human Avatars with Semantically-Guided Gaussian Splatting
Haoyu Zhao, Chen Yang, Hao Wang, Xingyue Zhao, Wei Shen
arXiv preprint, 19 Aug 2024
[arXiv]
CHASE: 3D-Consistent Human Avatars with Sparse Inputs via Gaussian Splatting and Contrastive Learning
Haoyu Zhao, Hao Wang, Chen Yang, Wei Shen
arXiv preprint, 19 Aug 2024
[arXiv]
Human-VDM: Learning Single-Image 3D Human Gaussian Splatting from Video Diffusion Models
Zhibin Liu, Haoye Dong, Aviral Chharia, Hefeng Wu
arXiv preprint, 4 Sep 2024
[arXiv] [Project] [Code]
Instant Facial Gaussians Translator for Relightable and Interactable Facial Rendering
Dafei Qin, Hongyang Lin, Qixuan Zhang, Kaichun Qiao, Longwen Zhang, Zijun Zhao, Jun Saito, Jingyi Yu, Lan Xu, Taku Komura
arXiv preprint, 11 Sep 2024
[arXiv] [Project] [Video]
GST: Precise 3D Human Body from a Single Image with Gaussian Splatting Transformers
Lorenza Prospero, Abdullah Hamdi, Joao F. Henriques, Christian Rupprecht
arXiv preprint, 6 Sep 2024
[arXiv] [Project] [Video] [Code]
JEAN: Joint Expression and Audio-guided NeRF-based Talking Face Generation
Sai Tanmay Reddy Chakkera, Aggelina Chatziagapi, Dimitris Samaras
BMVC 2024, 18 Sep 2024
[arXiv] [Project] [Video]
Disco4D: Disentangled 4D Human Generation and Animation from a Single Image
Hui En Pang, Shuai Liu, Zhongang Cai, Lei Yang, Tianwei Zhang, Ziwei Liu
arXiv preprint, 25 Sep 2024
[arXiv] [Project]
Gaussian Deja-vu: Creating Controllable 3D Gaussian Head-Avatars with Enhanced Generalization and Personalization Abilities
Peizhi Yan, Rabab Ward, Qiang Tang, Shan Du
WACV 2025, 23 Sep 2024
[arXiv]
Human Hair Reconstruction with Strand-Aligned 3D Gaussians
Egor Zakharov, Vanessa Sklyarova, Michael Black, Giljoo Nam, Justus Thies, Otmar Hilliges
arXiv preprint, 23 Sep 2024
[arXiv]
EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings
Yingdong Hu, Zhening Liu, Jiawei Shao, Zehong Lin, Jun Zhang
arXiv preprint, 2 Oct 2024
[arXiv] [Project]
DifFRelight: Diffusion-Based Facial Performance Relighting
Mingming He, Pascal Clausen, Ahmet Levent Taşel, Li Ma, Oliver Pilarski, Wenqi Xian, Laszlo Rikker, Xueming Yu, Ryan Burgert, Ning Yu, Paul Debevec
SIGGRAPH Asia 2024, 10 Oct 2024
[arXiv] [Project]
GS-VTON: Controllable 3D Virtual Try-on with Gaussian Splatting
Yukang Cao, Masoud Hadi, Liang Pan, Ziwei Liu
arXiv preprint, 7 Oct 2024
[arXiv]
LoDAvatar: Hierarchical Embedding and Adaptive Levels of Detail with Gaussian Splatting for Enhanced Human Avatars
Xiaonuo Dongye, Hanzhi Guo, Le Luo, Haiyan Jiang, Yihua Bao, Zeyu Tian, Dongdong Weng
arXiv preprint, 28 Oct 2024
[arXiv]
HFGaussian: Learning Generalizable Gaussian Human with Integrated Human Features
Arnab Dey, Cheng-You Lu, Andrew I. Comport, Srinath Sridhar, Chin-Teng Lin, Jean Martinet
arXiv preprint, 5 Nov 2024
[arXiv]
GazeGaussian: High-Fidelity Gaze Redirection with 3D Gaussian Splatting
Xiaobao Wei, Peng Chen, Guangyu Li, Ming Lu, Hui Chen, Feng Tian
20 Nov 2024
[arXiv] [Project]
Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters
Zhiyang Guo, Jinxu Xiang, Kai Ma, Wengang Zhou, Houqiang Li, Ran Zhang
27 Nov 2024
[arXiv] [Project] [Video] [Code]
DynamicAvatars: Accurate Dynamic Facial Avatars Reconstruction and Precise Editing with Diffusion Models
Yangyang Qian, Yuan Sun, Yu Guo
24 Nov 2024
[arXiv]
AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction
Lingteng Qiu, Shenhao Zhu, Qi Zuo, Xiaodong Gu, Yuan Dong, Junfei Zhang, Chao Xu, Zhe Li, Weihao Yuan, Liefeng Bo, Guanying Chen, Zilong Dong
3 Dec 2024
[arXiv] [Project] [Video] [Code]
TimeWalker: Personalized Neural Space for Lifelong Head Avatars
Dongwei Pan, Yang Li, Hongsheng Li, Kwan-Yee Lin
3 Dec 2024
[arXiv] [Project] [Video]
SAGA: Surface-Aligned Gaussian Avatar
Ronghan Chen, Yang Cong, Jiayue Liu
1 Dec 2024
[arXiv] [Project]
MixedGaussianAvatar: Realistically and Geometrically Accurate Head Avatar via Mixed 2D-3D Gaussian Splatting
Peng Chen, Xiaobao Wei, Qingpo Wuwu, Xinyi Wang, Xingyu Xiao, Ming Lu
6 Dec 2024
[arXiv] [Code]
GASP: Gaussian Avatars with Synthetic Priors
Jack Saunders, Charlie Hewitt, Yanan Jian, Marek Kowalski, Tadas Baltrusaitis, Yiye Chen, Darren Cosker, Virginia Estellers, Nicholas Gyde, Vinay P. Namboodiri, Benjamin E Lundell
10 Dec 2024
[arXiv] [Project] [Video]
GAF: Gaussian Avatar Reconstruction from Monocular Videos via Multi-view Diffusion
Jiapeng Tang, Davide Davoli, Tobias Kirschstein, Liam Schoneveld, Matthias Niessner
13 Dec 2024
[arXiv] [Video] [Project]
FaceLift: Single Image to 3D Head with View Generation and GS-LRM
Weijie Lyu, Yi Zhou, Ming-Hsuan Yang, Zhixin Shu
23 Dec 2024
[arXiv] [Project] [Code]
AvatarPerfect: User-Assisted 3D Gaussian Splatting Avatar Refinement with Automatic Pose Suggestion
Jotaro Sakamiya, I-Chao Shen, Jinsong Zhang, Mustafa Doga Dogan, Takeo Igarashi
20 Dec 2024
[arXiv]
SqueezeMe: Efficient Gaussian Avatars for VR
Shunsuke Saito, Stanislav Pidhorskyi, Igor Santesteban, Forrest Iandola, Divam Gupta, Anuj Pahuja, Nemanja Bartolovic, Frank Yu, Emanuel Garbin, Tomas Simon
19 Dec 2024
[arXiv] [Project]
GraphAvatar: Compact Head Avatars with GNN-Generated 3D Gaussians
Xiaobao Wei, Peng Chen, Ming Lu, Hui Chen, Feng Tian
AAAI 2025, 18 Dec 2024
[arXiv] [Code]
GaussianVTON: 3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting Editing with Image Prompting
Haodong Chen, Yongle Huang, Haojian Huang, Xiangsheng Ge, Dian Shao
arXiv preprint, 13 May 2024
[arXiv]
MOSS: Motion-based 3D Clothed Human Synthesis from Monocular Video
Hongsheng Wang, Xiang Cai, Xi Sun, Jinhong Yue, Shengyu Zhang, Feng Lin, Fei Wu
arXiv preprint, 21 May 2024
[arXiv] [Project] [Code]
GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details
Boqian Li, Xuan Li, Ying Jiang, Tianyi Xie, Feng Gao, Huamin Wang, Yin Yang, Chenfanfu Jiang
arXiv preprint, 20 May 2024
[arXiv] [Project]
NPGA: Neural Parametric Gaussian Avatars
Simon Giebenhain, Tobias Kirschstein, Martin Rünz, Lourdes Agapito, Matthias Nießner
arXiv preprint, 29 May 2024
[arXiv] [Project] [Video]
GGHead: Fast and Generalizable 3D Gaussian Heads
Tobias Kirschstein, Simon Giebenhain, Jiapeng Tang, Markos Georgopoulos, Matthias Nießner
arXiv preprint, 13 Jun 2024
[arXiv] [Project] [Video]
Human 3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models
Yuxuan Xue, Xianghui Xie, Riccardo Marin, Gerard Pons-Moll
arXiv preprint, 12 Jun 2024
[arXiv] [Project] [Code]
HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors
Panwang Pan, Zhuo Su, Chenguo Lin, Zhen Fan, Yongjie Zhang, Zeming Li, Tingting Shen, Yadong Mu, Yebin Liu
arXiv preprint, 18 Jun 2024
[arXiv] [Project]
PICA: Physics-Integrated Clothed Avatar
Bo Peng, Yunfan Tao, Haoyu Zhan, Yudong Guo, Juyong Zhang
arXiv preprint, 7 Jul 2024
[arXiv] [Project]
Gaussian Eigen Models for Human Heads
Wojciech Zielonka, Timo Bolkart, Thabo Beeler, Justus Thies
arXiv preprint, 5 Jul 2024
[arXiv] [Project]
Editing Implicit and Explicit Representations of Radiance Fields: A Survey
Arthur Hubert, Gamal Elghazaly, Raphael Frank
23 Dec 2024
[arXiv]
Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions
Keyang Ye, Tianjia Shao, Kun Zhou
arXiv preprint, 22 Nov 2023
[arXiv]
GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting
Yiwen Chen, Zilong Chen, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, Guosheng Lin
arXiv preprint, 24 Nov 2023
[arXiv] [Project]
Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF Decomposition and Ray Tracing
Jian Gao, Chun Gu, Youtian Lin, Hao Zhu, Xun Cao, Li Zhang, Yao Yao
arXiv preprint, 27 Nov 2023
[arXiv] [Project]
GART: Gaussian Articulated Template Models
Jiahui Lei, Yufu Wang, Georgios Pavlakos, Lingjie Liu, Kostas Daniilidis
arXiv preprint, 27 Nov 2023
[arXiv] [Project]
Animatable Gaussians: Learning Pose-dependent Gaussian Maps for High-fidelity Human Avatar Modeling
Zhe Li, Zerong Zheng, Lizhen Wang, Yebin Liu
arXiv preprint, 27 Nov 2023
[arXiv] [Project]
Point'n Move: Interactive Scene Object Manipulation on Gaussian Splatting Radiance Fields
Jiajun Huang, Hongchuan Yu
arXiv preprint, 28 Nov 2023
[arXiv]
TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts
Jingyu Zhuang, Di Kang, Yan-Pei Cao, Guanbin Li, Liang Lin, Ying Shan
SIGGRAPH 2024, 26 Jan 2024
[arXiv]
GaMeS: Mesh-Based Adapting and Modification of Gaussian Splatting
Joanna Waczyńska, Piotr Borycki, Sławomir Tadeja, Jacek Tabor, Przemysław Spurek
arXiv preprint, 2 Feb 2024
[arXiv]
GaussCtrl: Multi-View Consistent Text-Driven 3D Gaussian Splatting Editing
Jing Wu, Jia-Wang Bian, Xinghui Li, Guangrun Wang, Ian Reid, Philip Torr, Victor Adrian Prisacariu
arXiv preprint, 13 Mar 2024
[arXiv] [Project]
Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing
Tian-Xing Xu, Wenbo Hu, Yu-Kun Lai, Ying Shan, Song-Hai Zhang
arXiv preprint, 15 Mar 2024
[arXiv]
View-Consistent 3D Editing with Gaussian Splatting
Yuxuan Wang, Xuanyu Yi, Zike Wu, Na Zhao, Long Chen, Hanwang Zhang
arXiv preprint, 18 Mar 2024
[arXiv] [Project]
🔥Feature Splatting: Language-Driven Physics-Based Scene Synthesis and Editing
Ri-Zhao Qiu, Ge Yang, Weijia Zeng, Xiaolong Wang
ECCV 2024, 1 Apr 2024
Abstract
Scene representations using 3D Gaussian primitives have produced excellent results in modeling the appearance of static and dynamic 3D scenes. Many graphics applications, however, demand the ability to manipulate both the appearance and the physical properties of objects. We introduce Feature Splatting, an approach that unifies physics-based dynamic scene synthesis with rich semantics from vision language foundation models that are grounded by natural language. Our first contribution is a way to distill high-quality, object-centric vision-language features into 3D Gaussians, that enables semi-automatic scene decomposition using text queries. Our second contribution is a way to synthesize physics-based dynamics from an otherwise static scene using a particle-based simulator, in which material properties are assigned automatically via text queries. We ablate key techniques used in this pipeline, to illustrate the challenge and opportunities in using feature-carrying 3D Gaussians as a unified format for appearance, geometry, material properties and semantics grounded on natural language. Project website: this https URL
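The text-query step described above reduces to a similarity search over feature-carrying Gaussians. Below is a minimal sketch with random vectors standing in for the distilled vision-language features; the threshold and all sizes are illustrative assumptions, not the paper's procedure.

```python
# Toy text query over feature-carrying 3D Gaussians.
import torch
import torch.nn.functional as F

n, d = 50_000, 512
gauss_feat = F.normalize(torch.randn(n, d), dim=-1)  # distilled per-Gaussian features
text_feat = F.normalize(torch.randn(d), dim=0)       # e.g. embedding of "the vase"

sim = gauss_feat @ text_feat                         # cosine similarity per Gaussian
mask = sim > sim.mean() + 2 * sim.std()              # crude text-driven decomposition
print(f"{int(mask.sum())} Gaussians selected for the query")
# Downstream, material properties would be assigned to the selected subset
# before handing those Gaussians to the particle-based simulator.
```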
GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal
Yuxin Wang, Qianyi Wu, Guofeng Zhang, Dan Xu
arXiv preprint, 21 Apr 2024
[arXiv] [Project]
DGE: Direct Gaussian 3D Editing by Consistent Multi-view Editing
Minghao Chen, Iro Laina, Andrea Vedaldi
arXiv preprint, 29 Apr 2024
[arXiv] [Project] [Code]
TIGER: Text-Instructed 3D Gaussian Retrieval and Coherent Editing
Teng Xu, Jiamin Chen, Peng Chen, Youjia Zhang, Junqing Yu, Wei Yang
arXiv preprint, 23 May 2024
[arXiv]
D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup
Joanna Waczyńska, Piotr Borycki, Joanna Kaleta, Sławomir Tadeja, Przemysław Spurek
arXiv preprint, 23 May 2024
[arXiv]
ICE-G: Image Conditional Editing of 3D Gaussian Splats
Vishnu Jaganathan, Hannah Hanyun Huang, Muhammad Zubair Irshad, Varun Jampani, Amit Raj, Zsolt Kira
CVPR AI4CC Workshop 2024, 12 Jun 2024
[arXiv] [Project]
3DEgo: 3D Editing on the Go!
Umar Khalid, Hasan Iqbal, Azib Farooq, Jing Hua, Chen Chen
ECCV 2024, 14 Jul 2024
[arXiv] [Project]
3D Gaussian Editing with A Single Image
Guan Luo, Tian-Xing Xu, Ying-Tian Liu, Xiao-Xiong Fan, Fang-Lue Zhang, Song-Hai Zhang
arXiv preprint, 14 Aug 2024
[arXiv]
LumiGauss: High-Fidelity Outdoor Relighting with 2D Gaussian Splatting
Joanna Kaleta, Kacper Kania, Tomasz Trzcinski, Marek Kowalski
arXiv preprint, 6 Aug 2024
[arXiv]
Generative Object Insertion in Gaussian Splatting with a Multi-View Diffusion Model
Hongliang Zhong, Can Wang, Jingbo Zhang, Jing Liao
arXiv preprint, 25 Sep 2024
[arXiv] [Code]
GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians
Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, Na Zhao
arXiv preprint, 2 Oct 2024
[arXiv]
MiraGe: Editable 2D Images using Gaussian Splatting
Joanna Waczyńska, Tomasz Szczepanik, Piotr Borycki, Sławomir Tadeja, Thomas Bohné, Przemysław Spurek
arXiv preprint, 2 Oct 2024
[arXiv]
RNG: Relightable Neural Gaussians
Jiahui Fan, Fujun Luan, Jian Yang, Miloš Hašan, Beibei Wang
arXiv preprint, 29 Sep 2024
[arXiv]
ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing
Jun-Kun Chen, Yu-Xiong Wang
NeurIPS 2024, 7 Nov 2024
[arXiv] [Project]
Neural Surface Priors for Editable Gaussian Splatting
Jakub Szymkowiak, Weronika Jakubowska, Dawid Malarz, Weronika Smolak-Dyżewska, Maciej Zięba, Przemysław Musialski, Wojtek Pałubicki, Przemysław Spurek
27 Nov 2024
[arXiv] [Code]
SplatFlow: Multi-View Rectified Flow Model for 3D Gaussian Splatting Synthesis
Hyojun Go, Byeongjun Park, Jiho Jang, Jin-Young Kim, Soonwoo Kwon, Changick Kim
25 Nov 2024
[arXiv] [Project] [Code]
Gaussian Object Carver: Object-Compositional Gaussian Splatting with surfaces completion
Liu Liu, Xinjie Wang, Jiaxiong Qiu, Tianwei Lin, Xiaolin Zhou, Zhizhong Su
3 Dec 2024
[arXiv]
CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion
Kai He, Chin-Hsuan Wu, Igor Gilitschenski
2 Dec 2024
[arXiv] [Project]
Diffusion Models with Anisotropic Gaussian Splatting for Image Inpainting
Jacob Fein-Ashley, Benjamin Fein-Ashley
2 Dec 2024
[arXiv]
3DSceneEditor: Controllable 3D Scene Editing with Gaussian Splatting
Ziyang Yan, Lei Li, Yihua Shao, Siyu Chen, Wuzong Kai, Jenq-Neng Hwang, Hao Zhao, Fabio Remondino
2 Dec 2024
[arXiv]
Instant3dit: Multiview Inpainting for Fast Editing of 3D Objects
Amir Barda, Matheus Gadelha, Vladimir G. Kim, Noam Aigerman, Amit H. Bermano, Thibault Groueix
30 Nov 2024
[arXiv] [Project] [Code]
Diffusion-Based Attention Warping for Consistent 3D Scene Editing
Eyal Gomel, Lior Wolf
10 Dec 2024
[arXiv] [Project]
ProGDF: Progressive Gaussian Differential Field for Controllable and Flexible 3D Editing
Yian Zhao, Wanshi Xu, Yang Wu, Weiheng Huang, Zhongqian Sun, Wei Yang
11 Dec 2024
[arXiv]
EditSplat: Multi-View Fusion and Attention-Guided Optimization for View-Consistent 3D Scene Editing with 3D Gaussian Splatting
Dong In Lee, Hyeongcheol Park, Jiyoung Seo, Eunbyung Park, Hyunje Park, Ha Dam Baek, Shin Sangheon, Sangmin Kim, Sangpil Kim
16 Dec 2024
[arXiv]
Gaussian Splatting in Style
Abhishek Saroha, Mariia Gladkova, Cecilia Curreli, Tarun Yenamandra, Daniel Cremers
arXiv preprint, 13 Mar 2024
[arXiv]
StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting
Kunhao Liu, Fangneng Zhan, Muyu Xu, Christian Theobalt, Ling Shao, Shijian Lu
arXiv preprint, 12 Mar 2024
[arXiv] [Project] [Code]
StylizedGS: Controllable Stylization for 3D Gaussian Splatting
Dingxi Zhang, Zhuoxun Chen, Yu-Jie Yuan, Fang-Lue Zhang, Zhenliang He, Shiguang Shan, Lin Gao
arXiv preprint, 8 Apr 2024
[arXiv]
LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field
Jiyang Li, Lechao Cheng, Zhangye Wang, Tingting Mu, Jingxuan He
arXiv preprint, 13 Apr 2024
[arXiv]
InFusion: Inpainting 3D Gaussians via Learning Depth Completion from Diffusion Prior
Zhiheng Liu, Hao Ouyang, Qiuyu Wang, Ka Leong Cheng, Jie Xiao, Kai Zhu, Nan Xue, Yu Liu, Yujun Shen, Yang Cao
arXiv preprint, 17 Apr 2024
[arXiv] [Project] [Code]
3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting
Qihang Zhang, Yinghao Xu, Chaoyang Wang, Hsin-Ying Lee, Gordon Wetzstein, Bolei Zhou, Ceyuan Yang
arXiv preprint, 28 May 2024
[arXiv] [Project] [Code]
Enhancing Temporal Consistency in Video Editing by Reconstructing Videos with 3D Gaussian Splatting
Inkyu Shin, Qihang Yu, Xiaohui Shen, In So Kweon, Kuk-Jin Yoon, Liang-Chieh Chen
arXiv preprint, 4 Jun 2024
[arXiv] [Project]
StyleSplat: 3D Object Style Transfer with Gaussian Splatting
Sahil Jain, Avik Kuthiala, Prabhdeep Singh Sethi, Prakanshul Saxena
arXiv preprint, 12 Jul 2024
[arXiv] [Project]
InstantStyleGaussian: Efficient Art Style Transfer with 3D Gaussian Splatting
Xin-Yi Yu, Jun-Xin Yu, Li-Bo Zhou, Yan Wei, Lin-Lin Ou
arXiv preprint, 8 Aug 2024
[arXiv]
PRTGS: Precomputed Radiance Transfer of Gaussian Splats for Real-Time High-Quality Relighting
Yijia Guo, Yuanxi Bai, Liwen Hu, Ziyi Guo, Mianzhi Liu, Yu Cai, Tiejun Huang, Lei Ma
arXiv preprint, 7 Aug 2024
[arXiv]
Towards Realistic Example-based Modeling via 3D Gaussian Stitching
Xinyu Gao, Ziyi Yang, Bingchen Gong, Xiaoguang Han, Sipeng Yang, Xiaogang Jin
arXiv preprint, 28 Aug 2024
[arXiv] [Project]
G-Style: Stylized Gaussian Splatting
Áron Samuel Kovács, Pedro Hermosilla, Renata G. Raidou
arXiv preprint, 28 Aug 2024
[arXiv]
WaSt-3D: Wasserstein-2 Distance for Scene-to-Scene Stylization on 3D Gaussians
Dmytro Kotovenko, Olga Grebenkova, Nikolaos Sarafianos, Avinash Paliwal, Pingchuan Ma, Omid Poursaeed, Sreyas Mohan, Yuchen Fan, Yilei Li, Rakesh Ranjan, Björn Ommer
arXiv preprint, 26 Sep 2024
[arXiv] [Project] [Code]
4DStyleGaussian: Zero-shot 4D Style Transfer with Gaussian Splatting
Wanlin Liang, Hongbin Xu, Weitao Chen, Feng Xiao, Wenxiong Kang
arXiv preprint, 14 Oct 2024
[arXiv]
VeGaS: Video Gaussian Splatting
Weronika Smolak-Dyżewska, Dawid Malarz, Kornel Howil, Jan Kaczmarczyk, Marcin Mazur, Przemysław Spurek
17 Nov 2024
[arXiv] [Code]
🔥PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics
Tianyi Xie, Zeshun Zong, Yuxing Qiu, Xuan Li, Yutao Feng, Yin Yang, Chenfanfu Jiang
CVPR 2024, 20 Nov 2023
Abstract
We introduce PhysGaussian, a new method that seamlessly integrates physically grounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel motion synthesis. Employing a custom Material Point Method (MPM), our approach enriches 3D Gaussian kernels with physically meaningful kinematic deformation and mechanical stress attributes, all evolved in line with continuum mechanics principles. A defining characteristic of our method is the seamless integration between physical simulation and visual rendering: both components utilize the same 3D Gaussian kernels as their discrete representations. This negates the necessity for triangle/tetrahedron meshing, marching cubes, "cage meshes," or any other geometry embedding, highlighting the principle of "what you see is what you simulate (WS2)." Our method demonstrates exceptional versatility across a wide variety of materials--including elastic entities, metals, non-Newtonian fluids, and granular materials--showcasing its strong capabilities in creating diverse visual content with novel viewpoints and movements. Our project page is at: this https URL
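The "what you see is what you simulate" principle is easy to sketch: the Gaussian kernels themselves are the simulation state, so advancing them updates the renderable scene with no meshing step in between. The toy explicit integrator below (gravity plus a ground plane) is only a stand-in for the paper's MPM solver.

```python
# Toy "simulate the splats directly" loop (a stand-in for MPM).
import torch

centers = torch.rand(10_000, 3)              # Gaussian means, reused as particles
velocity = torch.zeros_like(centers)
dt = 1e-2
gravity = torch.tensor([0.0, -9.8, 0.0])

for _ in range(100):
    velocity += dt * gravity                 # external force on every kernel
    centers += dt * velocity                 # advect the kernels themselves
    hit = centers[:, 1] < 0.0                # collide with the ground plane y = 0
    centers[hit, 1] = 0.0
    velocity[hit, 1] = 0.0

# The displaced kernels (and, in the full method, their sheared covariances)
# feed straight back into the splatting renderer -- no mesh extraction needed.
print(centers[:, 1].min().item())
```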
Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting
Yutao Feng, Xiang Feng, Yintong Shang, Ying Jiang, Chang Yu, Zeshun Zong, Tianjia Shao, Hongzhi Wu, Kun Zhou, Chenfanfu Jiang, Yin Yang
arXiv preprint, 27 Jan 2024
[arXiv] [Project] [Video]
VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality
Ying Jiang, Chang Yu, Tianyi Xie, Xuan Li, Yutao Feng, Huamin Wang, Minchen Li, Henry Lau, Feng Gao, Yin Yang, Chenfanfu Jiang
arXiv preprint, 30 Jan 2024
[arXiv] [Project]
A Grid-Free Fluid Solver based on Gaussian Spatial Representation
Jingrui Xing, Bin Wang, Mengyu Chu, Baoquan Chen
arXiv preprint, 28 May 2024
[arXiv]
GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis
Yumeng He, Yunbo Wang, Xiaokang Yang
arXiv preprint, 31 May 2024
[arXiv]
DreamPhysics: Learning Physical Properties of Dynamic 3D Gaussians with Video Diffusion Priors
Tianyu Huang, Yihan Zeng, Hui Li, Wangmeng Zuo, Rynson W. H. Lau
arXiv preprint, 3 Jun 2024
[arXiv] [Code]
GASP: Gaussian Splatting for Physic-Based Simulations
Piotr Borycki, Weronika Smolak, Joanna Waczyńska, Marcin Mazur, Sławomir Tadeja, Przemysław Spurek
arXiv preprint, 9 Sep 2024
[arXiv]
Unleashing the Potential of Multi-modal Foundation Models and Video Diffusion for 4D Dynamic Physical Scene Simulation
Zhuoman Liu, Weicai Ye, Yan Luximon, Pengfei Wan, Di Zhang
arXiv preprint, 21 Nov 2024
[arXiv] [Project]
Automated 3D Physical Simulation of Open-world Scene with Gaussian Splatting
Haoyu Zhao, Hao Wang, Xingyue Zhao, Hongqiu Wang, Zhiyu Wu, Chengjiang Long, Hua Zou
arXiv preprint, 19 Nov 2024
[arXiv]
VR-Doh: Hands-on 3D Modeling in Virtual Reality
Zhaofeng Luo, Zhitong Cui, Shijian Luo, Mengyu Chu, Minchen Li
1 Dec 2024
[arXiv]
Language Embedded 3D Gaussians for Open-Vocabulary Scene Understanding
Jin-Chuan Shi, Miao Wang, Hao-Bin Duan, Shao-Hua Guan
arXiv preprint, 30 Nov 2023
[arXiv]
🔥LangSplat: 3D Language Gaussian Splatting
Minghan Qin, Wanhua Li, Jiawei Zhou, Haoqian Wang, Hanspeter Pfister
CVPR 2024, 26 Dec 2023
Abstract
Humans live in a 3D world and commonly use natural language to interact with a 3D scene. Modeling a 3D language field to support open-ended language queries in 3D has gained increasing attention recently. This paper introduces LangSplat, which constructs a 3D language field that enables precise and efficient open-vocabulary querying within 3D spaces. Unlike existing methods that ground CLIP language embeddings in a NeRF model, LangSplat advances the field by utilizing a collection of 3D Gaussians, each encoding language features distilled from CLIP, to represent the language field. By employing a tile-based splatting technique for rendering language features, we circumvent the costly rendering process inherent in NeRF. Instead of directly learning CLIP embeddings, LangSplat first trains a scene-wise language autoencoder and then learns language features on the scene-specific latent space, thereby alleviating substantial memory demands imposed by explicit modeling. Existing methods struggle with imprecise and vague 3D language fields, which fail to discern clear boundaries between objects. We delve into this issue and propose to learn hierarchical semantics using SAM, thereby eliminating the need for extensively querying the language field across various scales and the regularization of DINO features. Extensive experimental results show that LangSplat significantly outperforms the previous state-of-the-art method LERF by a large margin. Notably, LangSplat is extremely efficient, achieving a 199× speedup compared to LERF at the resolution of 1440×1080. We strongly recommend readers to check out our video results at this https URL
[arXiv] [Project] [Code] [Video]
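The scene-wise autoencoder that makes this tractable is simple to sketch. In the toy version below (our illustration; the dimensions and training data are assumptions), CLIP-sized features are compressed to a tiny latent space so that per-Gaussian language features become cheap to store and rasterize.

```python
# Toy scene-specific autoencoder for compressing language features.
import torch
import torch.nn as nn
import torch.nn.functional as F

clip_dim, latent_dim = 512, 3
enc = nn.Sequential(nn.Linear(clip_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, clip_dim))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

feats = torch.randn(10_000, clip_dim)        # stand-in for per-pixel CLIP features
for _ in range(200):                         # train on this one scene only
    loss = F.mse_loss(dec(enc(feats)), feats)
    opt.zero_grad(); loss.backward(); opt.step()

latents = enc(feats).detach()                # (10000, 3): cheap to splat per tile
print(latents.shape, f"recon loss: {loss.item():.4f}")
# At query time, splatted latents are decoded back to CLIP space and matched
# against the embedding of an open-vocabulary text query.
```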
Semantic Anything in 3D Gaussians
Xu Hu, Yuxi Wang, Lue Fan, Junsong Fan, Junran Peng, Zhen Lei, Qing Li, Zhaoxiang Zhang
arXiv preprint, 31 Jan 2024
[arXiv]
Semantic Gaussians: Open-Vocabulary Scene Understanding with 3D Gaussian Splatting
Jun Guo, Xiaojian Ma, Yue Fan, Huaping Liu, Qing Li
arXiv preprint, 22 Mar 2024
[arXiv] [Project] [Code]
CLIP-GS: CLIP-Informed Gaussian Splatting for Real-time and View-consistent 3D Semantic Understanding
Guibiao Liao, Jiankun Li, Zhenyu Bao, Xiaoqing Ye, Jingdong Wang, Qing Li, Kanglin Liu
arXiv preprint, 22 Apr 2024
[arXiv] [Code]
🔥HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting
Hongyu Zhou, Jiahao Shao, Lu Xu, Dongfeng Bai, Weichao Qiu, Bingbing Liu, Yue Wang, Andreas Geiger, Yiyi Liao
CVPR 2024, 19 Mar 2024
Abstract
Holistic understanding of urban scenes based on RGB images is a challenging yet important problem. It encompasses understanding both the geometry and appearance to enable novel view synthesis, parsing semantic labels, and tracking moving objects. Despite considerable progress, existing approaches often focus on specific aspects of this task and require additional inputs such as LiDAR scans or manually annotated 3D bounding boxes. In this paper, we introduce a novel pipeline that utilizes 3D Gaussian Splatting for holistic urban scene understanding. Our main idea involves the joint optimization of geometry, appearance, semantics, and motion using a combination of static and dynamic 3D Gaussians, where moving object poses are regularized via physical constraints. Our approach offers the ability to render new viewpoints in real-time, yielding 2D and 3D semantic information with high accuracy, and reconstruct dynamic scenes, even in scenarios where 3D bounding box detections are highly noisy. Experimental results on KITTI, KITTI-360, and Virtual KITTI 2 demonstrate the effectiveness of our approach.
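The abstract's physically regularized moving-object poses can be pictured with the toy regularizer below, which penalizes object trajectories for deviating from smooth, near-constant-velocity motion. The loss form and weights are assumptions for illustration; the paper's actual constraint may differ.

```python
# A minimal sketch of physical pose regularization in the spirit of HUGS:
# penalize per-frame object trajectories whose acceleration or heading rate
# changes abruptly. Loss form and weights are illustrative assumptions.
import torch

def motion_regularizer(translations: torch.Tensor, yaws: torch.Tensor,
                       w_acc: float = 1.0, w_yaw: float = 0.1) -> torch.Tensor:
    """translations: (T, 3) per-frame object centers; yaws: (T,) headings."""
    # Second difference of positions ~ acceleration; small for rigid vehicles.
    acc = translations[2:] - 2 * translations[1:-1] + translations[:-2]
    # Heading should also change smoothly across frames.
    yaw_rate = yaws[1:] - yaws[:-1]
    return (w_acc * acc.pow(2).mean()
            + w_yaw * (yaw_rate[1:] - yaw_rate[:-1]).pow(2).mean())

poses = torch.randn(20, 3, requires_grad=True)
yaws = torch.zeros(20, requires_grad=True)
motion_regularizer(poses, yaws).backward()
```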
Memorize What Matters: Emergent Scene Decomposition from Multitraverse
Yiming Li, Zehong Wang, Yue Wang, Zhiding Yu, Zan Gojcic, Marco Pavone, Chen Feng, Jose M. Alvarez
arXiv preprint, 27 May 2024
[arXiv] [Project] [Code]
FastLGS: Speeding up Language Embedded Gaussians with Feature Grid Mapping
Yuzhou Ji, He Zhu, Junshu Tang, Wuyi Liu, Zhizhong Zhang, Yuan Xie, Lizhuang Ma, Xin Tan
arXiv preprint, 4 Jun 2024
[arXiv]
OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding
Yanmin Wu, Jiarui Meng, Haijie Li, Chenming Wu, Yahao Shi, Xinhua Cheng, Chen Zhao, Haocheng Feng, Errui Ding, Jingdong Wang, Jian Zhang
arXiv preprint, 4 Jun 2024
[arXiv] [Project]
EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting
Daiwei Zhang, Gengyan Li, Jiajie Li, Mickaël Bressieux, Otmar Hilliges, Marc Pollefeys, Luc Van Gool, Xi Wang
arXiv preprint, 28 Jun 2024
[arXiv]
Scaling 3D Reasoning with LMMs to Large Robot Mission Environments Using Datagraphs
W. J. Meijer, A.C. Kemmeren, E.H.J. Riemens, J.E. Fransman, M. van Bekkum, G.J. Burghouts, J.D. van Mil
RSS Workshop on Semantics for Robotics 2024, 15 Jul 2024
[arXiv]
SpectralGaussians: Semantic, spectral 3D Gaussian splatting for multi-spectral scene representation, visualization and analysis
Saptarshi Neil Sinha, Holger Graf, Michael Weinmann
arXiv preprint, 13 Aug 2024
[arXiv]
🔥ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining
Qi Ma, Yue Li, Bin Ren, Nicu Sebe, Ender Konukoglu, Theo Gevers, Luc Van Gool, Danda Pani Paudel
arXiv preprint, 20 Aug 2024
Abstract
3D Gaussian Splatting (3DGS) has become the de facto method of 3D representation in many vision tasks. This calls for the 3D understanding directly in this representation space. To facilitate the research in this direction, we first build a large-scale dataset of 3DGS using the commonly used ShapeNet and ModelNet datasets. Our dataset ShapeSplat consists of 65K objects from 87 unique categories, whose labels are in accordance with the respective datasets. The creation of this dataset utilized the compute equivalent of 2 GPU years on a TITAN XP GPU. We utilize our dataset for unsupervised pretraining and supervised finetuning for classification and segmentation tasks. To this end, we introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters. Through exhaustive experiments, we provide several valuable insights. In particular, we show that (1) the distribution of the optimized GS centroids significantly differs from the uniformly sampled point cloud (used for initialization) counterpart; (2) this change in distribution results in degradation in classification but improvement in segmentation tasks when using only the centroids; (3) to leverage additional Gaussian parameters, we propose Gaussian feature grouping in a normalized feature space, along with splats pooling layer, offering a tailored solution to effectively group and embed similar Gaussians, which leads to notable improvement in finetuning tasks.
[arXiv]
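A minimal sketch of the "Gaussian feature grouping" plus "splats pooling" idea from the abstract: gather neighborhoods of normalized Gaussian parameters around sampled centers and pool each group into a token. The shapes, sampling scheme, and max pooling here are illustrative assumptions, not the paper's exact layers.

```python
# Sketch of grouping Gaussians in a normalized feature space and pooling
# each group into a token for a MAE-style backbone. All shapes and the
# pooling choice are assumptions for illustration.
import torch

def group_and_pool(centroids, feats, num_groups=64, k=16):
    """centroids: (N, 3) GS centers; feats: (N, C) remaining Gaussian
    parameters (opacity, scale, rotation, SH), normalized per group."""
    N = centroids.shape[0]
    centers = centroids[torch.randperm(N)[:num_groups]]   # sample group centers
    d = torch.cdist(centers, centroids)                   # (G, N) distances
    idx = d.topk(k, largest=False).indices                # k nearest splats per group
    grouped = feats[idx]                                  # (G, k, C)
    pooled = grouped.max(dim=1).values                    # splats pooling (max)
    return centers, pooled                                # tokens for the backbone

cent = torch.randn(4096, 3)
f = torch.randn(4096, 59)  # e.g. concatenated Gaussian parameters
centers, tokens = group_and_pool(cent, f)
```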
GS-PT: Exploiting 3D Gaussian Splatting for Comprehensive Point Cloud Understanding via Self-supervised Learning
Keyi Liu, Yeqi Luo, Weidong Yang, Jingyi Xu, Zhijun Li, Wen-Ming Chen, Ben Fei
arXiv preprint, 8 Sep 2024
[arXiv]
Gradient-Driven 3D Segmentation and Affordance Transfer in Gaussian Splatting Using 2D Masks
Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar
arXiv preprint, 18 Sep 2024
[arXiv] [Project] [Code]
EdgeGaussians -- 3D Edge Mapping via Gaussian Splatting
Kunal Chelani, Assia Benbihi, Torsten Sattler, Fredrik Kahl
arXiv preprint, 19 Sep 2024
[arXiv] [Code]
SRIF: Semantic Shape Registration Empowered by Diffusion-based Image Morphing and Flow Estimation
Mingze Sun, Chen Guo, Puhua Jiang, Shiwei Mao, Yurun Chen, Ruqi Huang
arXiv preprint, 18 Sep 2024
[arXiv]
Semantics-Controlled Gaussian Splatting for Outdoor Scene Reconstruction and Rendering in Virtual Reality
Hannah Schieber, Jacob Young, Tobias Langlotz, Stefanie Zollmann, Daniel Roth
arXiv preprint, 24 Sep 2024
[arXiv]
3DGS-DET: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection
Yang Cao, Yuanliang Jv, Dan Xu
arXiv preprint, 2 Oct 2024
[arXiv] [Code]
Gaussian-Det: Learning Closed-Surface Gaussians for 3D Object Detection
Hongru Yan, Yu Zheng, Yueqi Duan
arXiv preprint, 2 Oct 2024
[arXiv]
3D Vision-Language Gaussian Splatting
Qucheng Peng, Benjamin Planche, Zhongpai Gao, Meng Zheng, Anwesa Choudhuri, Terrence Chen, Chen Chen, Ziyan Wu
arXiv preprint, 10 Oct 2024
[arXiv]
4-LEGS: 4D Language Embedded Gaussian Splatting
Gal Fiebelman, Tamir Cohen, Ayellet Morgenstern, Peter Hedman, Hadar Averbuch-Elor
arXiv preprint, 14 Oct 2024
[arXiv] [Project]
3DArticCyclists: Generating Simulated Dynamic 3D Cyclists for Human-Object Interaction (HOI) and Autonomous Driving Applications
Eduardo R. Corral-Soto, Yang Liu, Tongtong Cao, Yuan Ren, Liu Bingbing
arXiv preprint, 14 Oct 2024
[arXiv]
MVSDet: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps
Yating Xu, Chen Li, Gim Hee Lee
NeurIPS 2024, 28 Oct 2024
[arXiv] [Code]
FAST-Splat: Fast, Ambiguity-Free Semantics Transfer in Gaussian Splatting
Ola Shorinwa, Jiankai Sun, Mac Schwager
arXiv preprint, 20 Nov 2024
[arXiv]
GLS: Geometry-aware 3D Language Gaussian Splatting
Jiaxiong Qiu, Liu Liu, Zhizhong Su, Tianwei Lin
27 Nov 2024
[arXiv] [Code]
UnitedVLN: Generalizable Gaussian Splatting for Continuous Vision-Language Navigation
Guangzhao Dai, Jian Zhao, Yuantao Chen, Yusen Qin, Hao Zhao, Guosen Xie, Yazhou Yao, Xiangbo Shu, Xuelong Li
25 Nov 2024
[arXiv]
Planar Gaussian Splatting
Farhad G. Zanjani, Hong Cai, Hanno Ackermann, Leila Mirvakhabova, Fatih Porikli
2 Dec 2024
[arXiv]
LineGS: 3D Line Segment Representation on 3D Gaussian Splatting
Chenggang Yang, Yuang Shi, Wei Tsang Ooi
13 Dec 2024
[arXiv]
SparseLGS: Sparse View Language Embedded Gaussian Splatting
Jun Hu, Zhang Chen, Zhong Li, Yi Xu, Juyong Zhang
3 Dec 2024
[arXiv] [Project]
Occam's LGS: A Simple Approach for Language Gaussian Splatting
Jiahuan Cheng, Jan-Nico Zaech, Luc Van Gool, Danda Pani Paudel
2 Dec 2024
[arXiv] [Project] [Code]
ChatSplat: 3D Conversational Gaussian Splatting
Hanlin Chen, Fangyin Wei, Gim Hee Lee
1 Dec 2024
[arXiv]
Feat2GS: Probing Visual Foundation Models with Gaussian Splatting
Yue Chen, Xingyu Chen, Anpei Chen, Gerard Pons-Moll, Yuliang Xiu
12 Dec 2024
[arXiv] [Project]
SLGaussian: Fast Language Gaussian Splatting in Sparse Views
Kangjie Chen, BingQuan Dai, Minghan Qin, Dongbin Zhang, Peihao Li, Yingshuang Zou, Haoqian Wang
11 Dec 2024
[arXiv]
LangSurf: Language-Embedded Surface Gaussians for 3D Scene Understanding
Hao Li, Roy Qin, Zhengyu Zou, Diqi He, Bohan Li, Bingquan Dai, Dingwen Zhang, Junwei Han
23 Dec 2024
[arXiv] [Project] [Code]
GSemSplat: Generalizable Semantic 3D Gaussian Splatting from Uncalibrated Image Pairs
Xingrui Wang, Cuiling Lan, Hanxin Zhu, Zhibo Chen, Yan Lu
22 Dec 2024
[arXiv]
GAGS: Granularity-Aware Feature Distillation for Language Gaussian Splatting
Yuning Peng, Haiping Wang, Yuan Liu, Chenglu Wen, Zhen Dong, Bisheng Yang
18 Dec 2024
[arXiv] [Project] [Code]
Vivar: A Generative AR System for Intuitive Multi-Modal Sensor Data Presentation
Yunqi Guo, Kaiyuan Hou, Heming Fu, Hongkai Chen, Zhenyu Yan, Guoliang Xing, Xiaofan Jiang
18 Dec 2024
[arXiv]
2D-Guided 3D Gaussian Segmentation
Kun Lan, Haoran Li, Haolin Shi, Wenjun Wu, Yong Liao, Lin Wang, Pengyuan Zhou
arXiv preprint, 26 Dec 2023
[arXiv]
CoSSegGaussians: Compact and Swift Scene Segmenting 3D Gaussians
Bin Dou, Tianyu Zhang, Yongjia Ma, Zhaohui Wang, Zejian Yuan
arXiv preprint, 11 Jan 2024
[arXiv] [Project]
OMEGAS: Object Mesh Extraction from Large Scenes Guided by Gaussian Segmentation
Lizhi Wang, Feng Zhou, Jianqin Yin
arXiv preprint, 24 Apr 2024
[arXiv] [Code]
RT-GS2: Real-Time Generalizable Semantic Segmentation for 3D Gaussian Representations of Radiance Fields
Mihnea-Bogdan Jurca, Remco Royen, Ion Giosan, Adrian Munteanu
arXiv preprint, 28 May 2024
[arXiv]
Fast and Efficient: Mask Neural Fields for 3D Scene Segmentation
Zihan Gao, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Yuwei Guo, Shuyuan Yang
arXiv preprint, 1 Jul 2024
[arXiv]
Segment Any 4D Gaussians
Shengxiang Ji, Guanjun Wu, Jiemin Fang, Jiazhong Cen, Taoran Yi, Wenyu Liu, Qi Tian, Xinggang Wang
arXiv preprint, 5 Jul 2024
[arXiv] [Project]
Click-Gaussian: Interactive Segmentation to Any 3D Gaussians
Seokhun Choi, Hyeonseop Song, Jaechul Kim, Taehyeong Kim, Hoseok Do
ECCV 2024, 16 Jul 2024
[arXiv] [Project]
🔥FlashSplat: 2D to 3D Gaussian Splatting Segmentation Solved Optimally
Qiuhong Shen, Xingyi Yang, Xinchao Wang
ECCV 2024, 12 Sep 2024
Abstract
This study addresses the challenge of accurately segmenting 3D Gaussian Splatting from 2D masks. Conventional methods often rely on iterative gradient descent to assign each Gaussian a unique label, leading to lengthy optimization and sub-optimal solutions. Instead, we propose a straightforward yet globally optimal solver for 3D-GS segmentation. The core insight of our method is that, with a reconstructed 3D-GS scene, the rendering of the 2D masks is essentially a linear function with respect to the labels of each Gaussian. As such, the optimal label assignment can be solved via linear programming in closed form. This solution capitalizes on the alpha blending characteristic of the splatting process for single step optimization. By incorporating the background bias in our objective function, our method shows superior robustness in 3D segmentation against noises. Remarkably, our optimization completes within 30 seconds, about 50× faster than the best existing methods. Extensive experiments demonstrate the efficiency and robustness of our method in segmenting various scenes, and its superior performance in downstream tasks such as object removal and inpainting. Demos and code will be available at this https URL.
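The closed-form insight in the abstract, that alpha-blended mask renderings are linear in per-Gaussian labels, can be sketched as follows: accumulate each Gaussian's blending weight under every 2D mask and take a per-Gaussian argmax with a bias toward background. The weight-tensor layout and bias value are assumptions for illustration.

```python
# Sketch of FlashSplat-style closed-form label assignment. Because alpha
# blending is linear in the labels, each Gaussian's label follows from an
# argmax over its accumulated blend weights; the bias value is illustrative.
import torch

def assign_labels(weights_per_label: torch.Tensor,
                  background_bias: float = 0.3) -> torch.Tensor:
    """weights_per_label: (V, N, L) accumulated alpha-blend weight that each
    of N Gaussians contributes to label L's mask in each of V views
    (label 0 = background)."""
    votes = weights_per_label.sum(dim=0).clone()  # (N, L) weight per label
    votes[:, 0] += background_bias                # bias toward background
    return votes.argmax(dim=1)                    # per-Gaussian optimal label

weights = torch.rand(8, 1000, 5)                  # 8 views, 1000 Gaussians, 5 labels
labels = assign_labels(weights)
```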
PLGS: Robust Panoptic Lifting with 3D Gaussian Splatting
Yu Wang, Xiaobao Wei, Ming Lu, Guoliang Kang
arXiv preprint, 23 Oct 2024
[arXiv]
GaussianCut: Interactive segmentation via graph cut for 3D Gaussian Splatting
Umangi Jain, Ashkan Mirzaei, Igor Gilitschenski
arXiv preprint, 12 Nov 2024
[arXiv]
GradiSeg: Gradient-Guided Gaussian Segmentation with Enhanced 3D Boundary Precision
Zehao Li, Wenwei Han, Yujun Cai, Hao Jiang, Baolong Bi, Shuqin Gao, Honglong Zhao, Zhaoqi Wang
30 Nov 2024
[arXiv]
Efficient Semantic Splatting for Remote Sensing Multi-view Segmentation
Zipeng Qi, Hao Chen, Haotian Zhang, Zhengxia Zou, Zhenwei Shi
12 Dec 2024
[arXiv]
SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians
Siyun Liang, Sen Wang, Kunyi Li, Michael Niemeyer, Stefano Gasperini, Nassir Navab, Federico Tombari
13 Dec 2024
[arXiv]
DCSEG: Decoupled 3D Open-Set Segmentation using Gaussian Splatting
Luis Wiedmann, Luca Wiehe, David Rozenberszki
14 Dec 2024
[arXiv]
Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting
Ziyi Yang, Xinyu Gao, Yangtian Sun, Yihua Huang, Xiaoyang Lyu, Wen Zhou, Shaohui Jiao, Xiaojuan Qi, Xiaogang Jin
arXiv preprint, 24 Feb 2024
[arXiv]
NeRF and Gaussian Splatting SLAM in the Wild
Fabian Schmidt, Markus Enzweiler, Abhinav Valada
4 Dec 2024
[arXiv] [Code]
GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting
Chi Yan, Delin Qu, Dan Xu, Bin Zhao, Zhigang Wang, Dong Wang, Xuelong Li
CVPR 2024, 20 Nov 2023
[arXiv] [Project]
🔥SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM
Nikhil Keetha, Jay Karhade, Krishna Murthy Jatavallabhula, Gengshan Yang, Sebastian Scherer, Deva Ramanan, Jonathon Luiten
CVPR 2024, 4 Dec 2023
Abstract
Dense simultaneous localization and mapping (SLAM) is crucial for robotics and augmented reality applications. However, current methods are often hampered by the non-volumetric or implicit way they represent a scene. This work introduces SplaTAM, an approach that, for the first time, leverages explicit volumetric representations, i.e., 3D Gaussians, to enable high-fidelity reconstruction from a single unposed RGB-D camera, surpassing the capabilities of existing methods. SplaTAM employs a simple online tracking and mapping system tailored to the underlying Gaussian representation. It utilizes a silhouette mask to elegantly capture the presence of scene density. This combination enables several benefits over prior representations, including fast rendering and dense optimization, quickly determining if areas have been previously mapped, and structured map expansion by adding more Gaussians. Extensive experiments show that SplaTAM achieves up to 2x superior performance in camera pose estimation, map construction, and novel-view synthesis over existing methods, paving the way for more immersive high-fidelity SLAM applications.
[arXiv] [Project] [Code] [Video]
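The silhouette-guided mapping described in the abstract can be pictured with the sketch below: render an accumulated-opacity (silhouette) image and spawn new Gaussians only at pixels the current map does not yet explain. The threshold and helper shapes are illustrative assumptions standing in for a real rasterizer.

```python
# Sketch of SplaTAM-style silhouette-guided densification: pixels where the
# rendered silhouette is low but the sensor saw geometry become seed points
# for new Gaussians. Threshold and shapes are illustrative assumptions.
import torch

def densify(silhouette: torch.Tensor, depth: torch.Tensor,
            threshold: float = 0.5) -> torch.Tensor:
    """silhouette: (H, W) accumulated opacity in [0, 1]; depth: (H, W)
    sensor depth. Returns pixel coordinates needing new Gaussians."""
    unmapped = silhouette < threshold      # map does not yet cover these pixels
    valid = depth > 0                      # only where the sensor saw geometry
    ys, xs = torch.where(unmapped & valid)
    return torch.stack([xs, ys], dim=1)    # seed points for new Gaussians

sil = torch.rand(480, 640)
d = torch.rand(480, 640)
seeds = densify(sil, d)
```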
Gaussian-SLAM: Photo-realistic Dense SLAM with Gaussian Splatting
Vladimir Yugay, Yue Li, Theo Gevers, Martin R. Oswald
6 Dec 2023
[arXiv]
🔥Gaussian Splatting SLAM
Hidenobu Matsuki, Riku Murai, Paul H.J. Kelly, Andrew J. Davison
CVPR 2024, 11 Dec 2023
Abstract
We present the first application of 3D Gaussian Splatting in monocular SLAM, the most fundamental but the hardest setup for Visual SLAM. Our method, which runs live at 3fps, utilises Gaussians as the only 3D representation, unifying the required representation for accurate, efficient tracking, mapping, and high-quality rendering. Designed for challenging monocular settings, our approach is seamlessly extendable to RGB-D SLAM when an external depth sensor is available. Several innovations are required to continuously reconstruct 3D scenes with high fidelity from a live camera. First, to move beyond the original 3DGS algorithm, which requires accurate poses from an offline Structure from Motion (SfM) system, we formulate camera tracking for 3DGS using direct optimisation against the 3D Gaussians, and show that this enables fast and robust tracking with a wide basin of convergence. Second, by utilising the explicit nature of the Gaussians, we introduce geometric verification and regularisation to handle the ambiguities occurring in incremental 3D dense reconstruction. Finally, we introduce a full SLAM system which not only achieves state-of-the-art results in novel view synthesis and trajectory estimation but also reconstruction of tiny and even transparent objects.
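The abstract's camera tracking by direct optimisation against the 3D Gaussians can be sketched as a small photometric pose refinement, assuming a hypothetical differentiable `render(pose)` rasterizer; the optimizer, pose parameterization, and learning rate are illustrative choices, not the paper's implementation.

```python
# Sketch of direct pose optimization against a fixed Gaussian map: run
# gradient descent on a 6-D pose vector to minimize photometric error.
# `render` is a hypothetical differentiable rasterizer.
import torch

def track_frame(render, image: torch.Tensor, pose_init: torch.Tensor,
                iters: int = 50, lr: float = 1e-2) -> torch.Tensor:
    pose = pose_init.clone().requires_grad_(True)   # (6,) se(3) parameters
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        rendered = render(pose)                     # differentiable render
        loss = (rendered - image).abs().mean()      # photometric L1 error
        opt.zero_grad(); loss.backward(); opt.step()
    return pose.detach()

# Toy stand-in for a rasterizer so the sketch runs end to end.
target = torch.rand(48, 64, 3)
fake_render = lambda p: target + p.mean()           # any pose-dependent image
pose = track_frame(fake_render, target, torch.full((6,), 0.5))
```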
LIV-GaussMap: LiDAR-Inertial-Visual Fusion for Real-time 3D Radiance Field Map Rendering
Sheng Hong, Junjie He, Xinhu Zheng, Hesheng Wang, Hao Fang, Kangcheng Liu, Chunran Zheng, Shaojie Shen
arXiv preprint, 26 Jan 2024
[arXiv]
🔥SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM
Mingrui Li, Shuhong Liu, Heng Zhou
arXiv preprint, 5 Feb 2024
[arXiv]
MoD-SLAM: Monocular Dense Mapping for Unbounded 3D Scene Reconstruction
Heng Zhou, Zhetao Guo, Shuhong Liu, Lechen Zhang, Qihao Wang, Yuxiang Ren, Mingrui Li
arXiv preprint, 6 Feb 2024
[arXiv]
SemGauss-SLAM: Dense Semantic Gaussian Splatting SLAM
Siting Zhu, Renjie Qin, Guangming Wang, Jiuming Liu, Hesheng Wang
arXiv preprint, 13 Mar 2024
[arXiv]
RGBD GS-ICP SLAM
Seongbo Ha, Jiung Yeon, Hyeonwoo Yu
arXiv preprint, 19 Mar 2024
[arXiv]
NEDS-SLAM: A Novel Neural Explicit Dense Semantic SLAM Framework using 3D Gaussian Splatting
Yiming Ji, Yang Liu, Guanghu Xie, Boyu Ma, Zongwu Xie
arXiv preprint, 18 Mar 2024
[arXiv]
CG-SLAM: Efficient Dense RGB-D SLAM in a Consistent Uncertainty-aware 3D Gaussian Field
Jiarui Hu, Xianhao Chen, Boyin Feng, Guanglin Li, Liangjing Yang, Hujun Bao, Guofeng Zhang, Zhaopeng Cui
arXiv preprint, 24 Mar 2024
[arXiv] [Project] [Code]
HGS-Mapping: Online Dense Mapping Using Hybrid Gaussian Representation in Urban Scenes
Ke Wu, Kaizhao Zhang, Zhiwei Zhang, Shanshuai Yuan, Muer Tie, Julong Wei, Zijun Xu, Jieru Zhao, Zhongxue Gan, Wenchao Ding
arXiv preprint, 29 Mar 2024
[arXiv]
MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements
Lisong C. Sun, Neel P. Bhatt, Jonathan C. Liu, Zhiwen Fan, Zhangyang Wang, Todd E. Humphreys, Ufuk Topcu
arXiv preprint, 1 Apr 2024
[arXiv] [Project] [Video]
Gaussian-LIC: Photo-realistic LiDAR-Inertial-Camera SLAM with 3D Gaussian Splatting
Xiaolei Lang, Laijian Li, Hang Zhang, Feng Xiong, Mu Xu, Yong Liu, Xingxing Zuo, Jiajun Lv
IROS 2024, 10 Apr 2024
[arXiv]
RTG-SLAM: Real-time 3D Reconstruction at Scale using Gaussian Splatting
Zhexi Peng, Tianjia Shao, Yong Liu, Jingke Zhou, Yin Yang, Jingdong Wang, Kun Zhou
arXiv preprint, 30 Apr 2024
[arXiv]
MGS-SLAM: Monocular Sparse Tracking and Gaussian Mapping with Depth Smooth Regularization
Pengcheng Zhu, Yaoming Zhuang, Baoquan Chen, Li Li, Chengdong Wu, Zhanlin Liu
arXiv preprint, 10 May 2024
[arXiv]
NGM-SLAM: Gaussian Splatting SLAM with Radiance Field Submap
Mingrui Li, Jingwei Huang, Lei Sun, Aaron Xuxiang Tian, Tianchen Deng, Hongyu Wang
arXiv preprint, 9 May 2024
[arXiv]
GS-Planner: A Gaussian-Splatting-based Planning Framework for Active High-Fidelity Reconstruction
Rui Jin, Yuman Gao, Haojian Lu, Fei Gao
arXiv preprint, 16 May 2024
[arXiv]
GaussNav: Gaussian Splatting for Visual Navigation
Xiaohan Lei, Min Wang, Wengang Zhou, Houqiang Li
arXiv preprint, 18 Mar 2024
[arXiv]
Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians
Erik Sandström, Keisuke Tateno, Michael Oechsle, Michael Niemeyer, Luc Van Gool, Martin R. Oswald, Federico Tombari
arXiv preprint, 26 May 2024
[arXiv] [Code]
Structure Gaussian SLAM with Manhattan World Hypothesis
Shuhong Liu, Heng Zhou, Liuzhuozheng Li, Yun Liu, Tianchen Deng, Yiming Zhou, Mingrui Li
arXiv preprint, 30 May 2024
[arXiv]
From Perfect to Noisy World Simulation: Customizable Embodied Multi-modal Perturbations for SLAM Robustness Benchmarking
Xiaohao Xu, Tianyi Zhang, Sibo Wang, Xiang Li, Yongqi Chen, Ye Li, Bhiksha Raj, Matthew Johnson-Roberson, Xiaonan Huang
arXiv preprint, 24 Jun 2024
[arXiv]
I^2-SLAM: Inverting Imaging Process for Robust Photorealistic Dense SLAM
Gwangtak Bae, Changwoon Choi, Hyeongjun Heo, Sang Min Kim, Young Min Kim
ECCV 2024, 16 Jul 2024
[arXiv]
Evaluating Modern Approaches in 3D Scene Reconstruction: NeRF vs Gaussian-Based Methods
Yiming Zhou, Zixuan Zeng, Andi Chen, Xiaofan Zhou, Haowei Ni, Shiyao Zhang, Panfeng Li, Liangxi Liu, Mengyao Zheng, Xupeng Chen
2024 6th International Conference on Data-driven Optimization of Complex Systems, 8 Aug 2024
[arXiv]
Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis
Zhongche Qu, Zhi Zhang, Cong Liu, Jianhua Yin
arXiv preprint, 10 Aug 2024
[arXiv]
Towards Real-Time Gaussian Splatting: Accelerating 3DGS through Photometric SLAM
Yan Song Hu, Dayou Mao, Yuhao Chen, John Zelek
arXiv preprint, 7 Aug 2024
[arXiv]
IG-SLAM: Instant Gaussian SLAM
F. Aykut Sarikamis, A. Aydin Alatan
arXiv preprint, 2 Aug 2024
[arXiv]
MotionGS: Compact Gaussian Splatting SLAM by Motion Filter
Xinli Guo, Peng Han, Weidong Zhang, Hongtian Chen
arXiv preprint, 18 May 2024
[arXiv]
GSFusion: Online RGB-D Mapping Where Gaussian Splatting Meets TSDF Fusion
Jiaxin Wei, Stefan Leutenegger
arXiv preprint, 22 Aug 2024
[arXiv] [Code]
LoopSplat: Loop Closure by Registering 3D Gaussian Splats
Liyuan Zhu, Yue Li, Erik Sandström, Shengyu Huang, Konrad Schindler, Iro Armeni
3DV 2025, 19 Aug 2024
[arXiv] [Project] [Code]
OG-Mapping: Octree-based Structured 3D Gaussians for Online Dense Mapping
Meng Wang, Junyi Wang, Changqun Xia, Chen Wang, Yue Qi
arXiv preprint, 30 Aug 2024
[arXiv]
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
Chunran Zheng, Wei Xu, Zuhao Zou, Tong Hua, Chongjian Yuan, Dongjiao He, Bingyang Zhou, Zheng Liu, Jiarong Lin, Fangcheng Zhu, Yunfan Ren, Rong Wang, Fanle Meng, Fu Zhang
arXiv preprint, 26 Aug 2024
[arXiv]
Hi-SLAM: Scaling-up Semantics in SLAM with a Hierarchically Categorical Gaussian Splatting
Boying Li, Zhixi Cai, Yuan-Fang Li, Ian Reid, Hamid Rezatofighi
arXiv preprint, 19 Sep 2024
[arXiv]
GLC-SLAM: Gaussian Splatting SLAM with Efficient Loop Closure
Ziheng Xu, Qingfeng Li, Chen Chen, Xuefeng Liu, Jianwei Niu
arXiv preprint, 17 Sep 2024
[arXiv]
Go-SLAM: Grounded Object Segmentation and Localization with Gaussian Splatting SLAM
Phu Pham, Dipam Patel, Damon Conover, Aniket Bera
arXiv preprint, 26 Sep 2024
[arXiv]
CaRtGS: Computational Alignment for Real-Time Gaussian Splatting SLAM
Dapeng Feng, Zhiqiang Chen, Yizhen Yin, Shipeng Zhong, Yuhua Qi, Hongbo Chen
arXiv preprint, 1 Oct 2024
[arXiv]
Robust Gaussian Splatting SLAM by Leveraging Loop Closure
Zunjie Zhu, Youxu Fang, Xin Li, Chenggang Yan, Feng Xu, Chau Yuen, Yanyan Li
arXiv preprint, 30 Sep 2024
[arXiv]
ES-Gaussian: Gaussian Splatting Mapping via Error Space-Based Gaussian Completion
Lu Chen, Yingfu Zeng, Haoang Li, Zhitao Deng, Jiafu Yan, Zhenjun Zhao
arXiv preprint, 9 Oct 2024
[arXiv] [Project]
GSLoc: Visual Localization with 3D Gaussian Splatting
Kazii Botashev, Vladislav Pyatov, Gonzalo Ferrer, Stamatios Lefkimmiatis
arXiv preprint, 8 Oct 2024
[arXiv]
LoGS: Visual Localization via Gaussian Splatting with Fewer Training Images
Yuzhou Cheng, Jianhao Jiao, Yue Wang, Dimitrios Kanoulas
arXiv preprint, 15 Oct 2024
[arXiv]
GSORB-SLAM: Gaussian Splatting SLAM benefits from ORB features and Transmittance information
Wancai Zheng, Xinyi Yu, Jintao Rong, Linlin Ou, Yan Wei, Libo Zhou
arXiv preprint, 15 Oct 2024
[arXiv]
AG-SLAM: Active Gaussian Splatting SLAM
Wen Jiang, Boshu Lei, Katrina Ashton, Kostas Daniilidis
arXiv preprint, 22 Oct 2024
[arXiv]
XRDSLAM: A Flexible and Modular Framework for Deep Learning based SLAM
Xiaomeng Wang, Nan Wang, Guofeng Zhang
arXiv preprint, 31 Oct 2024
[arXiv]
LVI-GS: Tightly-coupled LiDAR-Visual-Inertial SLAM using 3D Gaussian Splatting
Huibin Zhao, Weipeng Guan, Peng Lu
arXiv preprint, 5 Nov 2024
[arXiv]
DG-SLAM: Robust Dynamic Gaussian Splatting SLAM with Hybrid Pose Optimization
Yueming Xu, Haochen Jiang, Zhongyang Xiao, Jianfeng Feng, Li Zhang
arXiv preprint, 13 Nov 2024
[arXiv]
MBA-SLAM: Motion Blur Aware Dense Visual SLAM with Radiance Fields Representation
Peng Wang, Lingzhe Zhao, Yin Zhang, Shiyu Zhao, Peidong Liu
arXiv preprint, 13 Nov 2024
[arXiv] [Code]
LiV-GS: LiDAR-Vision Integration for 3D Gaussian Splatting SLAM in Outdoor Environments
Renxiang Xiao, Wei Liu, Yushuai Chen, Liang Hu
19 Nov 2024
[arXiv]
DGS-SLAM: Gaussian Splatting SLAM in Dynamic Environment
Mangyu Kong, Jaewon Lee, Seongwon Lee, Euntai Kim
16 Nov 2024
[arXiv]
HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction
Wei Zhang, Qing Cheng, David Skuddis, Niclas Zeller, Daniel Cremers, Norbert Haala
27 Nov 2024
[arXiv] [Project] [Code]
DROID-Splat: Combining end-to-end SLAM with 3D Gaussian Splatting
Christian Homeyer, Leon Begiristain, Christoph Schnörr
26 Nov 2024
[arXiv] [Code]
PG-SLAM: Photo-realistic and Geometry-aware RGB-D SLAM in Dynamic Environments
Haoang Li, Xiangqi Meng, Xingxing Zuo, Zhe Liu, Hesheng Wang, Daniel Cremers
24 Nov 2024
[arXiv]
Gassidy: Gaussian Splatting SLAM in Dynamic Environments
Long Wen, Shixin Li, Yu Zhang, Yuhong Huang, Jianjie Lin, Fengjunjie Pan, Zhenshan Bing, Alois Knoll
23 Nov 2024
[arXiv]
RGBDS-SLAM: A RGB-D Semantic Dense SLAM Based on 3D Multi Level Pyramid Gaussian Splatting
Zhenzhong Cao, Chenyang Zhao, Qianyi Zhang, Jinzheng Guang, Yinuo Song, Jingtai Liu
2 Dec 2024
[arXiv] [Code]
FlashSLAM: Accelerated RGB-D SLAM for Real-Time 3D Scene Reconstruction with Gaussian Splatting
Phu Pham, Damon Conover, Aniket Bera
1 Dec 2024
[arXiv]
MAC-Ego3D: Multi-Agent Gaussian Consensus for Real-Time Collaborative Ego-Motion and Photorealistic 3D Reconstruction
Xiaohao Xu, Feng Xue, Shibo Zhao, Yike Pan, Sebastian Scherer, Xiaonan Huang
12 Dec 2024
[arXiv] [Code]
RP-SLAM: Real-time Photorealistic SLAM with Efficient 3D Gaussian Splatting
Lizhi Bai, Chunqi Tian, Jun Yang, Siyu Zhang, Masanori Suganuma, Takayuki Okatani
13 Dec 2024
[arXiv]
4D Radar-Inertial Odometry based on Gaussian Modeling and Multi-Hypothesis Scan Matching
Fernando Amodeo, Luis Merino, Fernando Caballero
18 Dec 2024
[arXiv] [Code]
DynOMo: Online Point Tracking by Dynamic Online Monocular Gaussian Reconstruction
Jenny Seidenschwarz, Qunjie Zhou, Bardienus Duisterhof, Deva Ramanan, Laura Leal-Taixé
arXiv preprint, 3 Sep 2024
[arXiv]
GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization
Yahao Shi, Yanmin Wu, Chenming Wu, Xing Liu, Chen Zhao, Haocheng Feng, Jingtuo Liu, Liangjun Zhang, Jian Zhang, Bin Zhou, Errui Ding, Jingdong Wang
arXiv preprint, 8 Dec 2023
[arXiv] [Project]
DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading
Tong Wu, Jia-Mu Sun, Yu-Kun Lai, Yuewen Ma, Leif Kobbelt, Lin Gao
arXiv preprint, 15 Apr 2024
[arXiv]
Progressive Radiance Distillation for Inverse Rendering with Gaussian Splatting
Keyang Ye, Qiming Hou, Kun Zhou
arXiv preprint, 14 Aug 2024
[arXiv]
GS-ID: Illumination Decomposition on Gaussian Splatting via Diffusion Prior and Parametric Light Source Optimization
Kang Du, Zhihao Liang, Zeyu Wang
arXiv preprint, 16 Aug 2024
[arXiv]
Phys3DGS: Physically-based 3D Gaussian Splatting for Inverse Rendering
Euntae Choi, Sungjoo Yoo
arXiv preprint, 16 Sep 2024
[arXiv]
🔥 Flash-Splat: 3D Reflection Removal with Flash Cues and Gaussian Splats
Mingyang Xie, Haoming Cai, Sachin Shah, Yiran Xu, Brandon Y. Feng, Jia-Bin Huang, Christopher A. Metzler
arXiv preprint, 3 Oct 2024
Abstract
We introduce a simple yet effective approach for separating transmitted and reflected light. Our key insight is that the powerful novel view synthesis capabilities provided by modern inverse rendering methods (e.g., 3D Gaussian splatting) allow one to perform flash/no-flash reflection separation using unpaired measurements; this relaxation dramatically simplifies image acquisition over conventional paired flash/no-flash reflection separation methods. Through extensive real-world experiments, we demonstrate our method, Flash-Splat, accurately reconstructs both transmitted and reflected scenes in 3D. Our method outperforms existing 3D reflection separation methods, which do not leverage illumination control, by a large margin. Our project webpage is at this https URL.
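One way to picture the unpaired flash/no-flash separation in the abstract is the simplified objective below: two Gaussian fields (transmitted and reflected) whose renders must jointly explain both capture sets, sharing the reflection component. The image-formation model and the `render_*` helpers are assumptions, not the paper's exact formulation.

```python
# Sketch of an unpaired flash/no-flash separation objective: no-flash views
# see transmitted + reflected light; flash views see a flash-lit transmitted
# scene plus the same reflection. Model form is an illustrative assumption.
import torch

def separation_loss(render_T, render_T_flash, render_R,
                    noflash_views, flash_views):
    loss = torch.zeros(())
    for pose, img in noflash_views:
        loss = loss + (render_T(pose) + render_R(pose) - img).abs().mean()
    for pose, img in flash_views:  # poses need not pair with the set above
        loss = loss + (render_T_flash(pose) + render_R(pose) - img).abs().mean()
    return loss

# Toy renderers and views so the sketch runs end to end.
toy = lambda pose: torch.zeros(8, 8, 3)
views = [(None, torch.rand(8, 8, 3))]
print(separation_loss(toy, toy, toy, views, views))
```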
GI-GS: Global Illumination Decomposition on Gaussian Splatting for Inverse Rendering
Hongze Chen, Zehong Lin, Jun Zhang
arXiv preprint, 3 Oct 2024
[arXiv]
RelitLRM: Generative Relightable Radiance for Large Reconstruction Models
Tianyuan Zhang, Zhengfei Kuang, Haian Jin, Zexiang Xu, Sai Bi, Hao Tan, He Zhang, Yiwei Hu, Milos Hasan, William T. Freeman, Kai Zhang, Fujun Luan
arXiv preprint, 8 Oct 2024
[arXiv] [Project]
GS^3: Efficient Relighting with Triple Gaussian Splatting
Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu
SIGGRAPH Asia 2024, 15 Oct 2024
[arXiv] [Project] [Code]
Triplet: Triangle Patchlet for Mesh-Based Inverse Rendering and Scene Parameters Approximation
Jiajie Yang
arXiv preprint, 16 Oct 2024
[arXiv]
GlossyGS: Inverse Rendering of Glossy Objects with 3D Gaussian Splatting
Shuichang Lai, Letian Huang, Jie Guo, Kai Cheng, Bowen Pan, Xiaoxiao Long, Jiangjing Lyu, Chengfei Lv, Yanwen Guo
arXiv preprint, 17 Oct 2024
[arXiv]
SpectroMotion: Dynamic 3D Reconstruction of Specular Scenes
Cheng-De Fan, Chen-Wei Chang, Yi-Ruei Liu, Jie-Ying Lee, Jiun-Long Huang, Yu-Chee Tseng, Yu-Lun Liu
arXiv preprint, 22 Oct 2024
[arXiv] [Project]
GeoSplatting: Towards Geometry Guided Gaussian Splatting for Physically-based Inverse Rendering
Kai Ye, Chong Gao, Guanbin Li, Wenzheng Chen, Baoquan Chen
arXiv preprint, 31 Oct 2024
[arXiv] [Project]
Scaled Inverse Graphics: Efficiently Learning Large Sets of 3D Scenes
Karim Kassab, Antoine Schnepf, Jean-Yves Franceschi, Laurent Caraffa, Flavian Vasile, Jeremie Mary, Andrew Comport, Valérie Gouet-Brunet
arXiv preprint, 31 Oct 2024
[arXiv] [Project]
GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering
Zhihao Liang, Hongdong Li, Kui Jia, Kailing Guo, Qi Zhang
arXiv preprint, 12 Nov 2024
[arXiv]
IRGS: Inter-Reflective Gaussian Splatting with 2D Gaussian Ray Tracing
Chun Gu, Xiaofei Wei, Zixuan Zeng, Yuxuan Yao, Li Zhang
20 Dec 2024
[arXiv] [Project] [Code]
Deblurring 3D Gaussian Splatting
Byeonghyeon Lee, Howoong Lee, Xiangyu Sun, Usman Ali, Eunbyung Park
arXiv preprint, 1 Jan 2024
[arXiv] [Project] [Code]
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
Lingzhe Zhao, Peng Wang, Peidong Liu
arXiv preprint, 18 Mar 2024
[arXiv] [Project] [Code]
Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion
Otto Seiskari, Jerry Ylilammi, Valtteri Kaatrasalo, Pekka Rantalankila, Matias Turkulainen, Juho Kannala, Esa Rahtu, Arno Solin
arXiv preprint, 20 Mar 2024
[arXiv] [Code]
BAGS: Blur Agnostic Gaussian Splatting through Multi-Scale Kernel Modeling
Cheng Peng, Yutao Tang, Yifan Zhou, Nengyu Wang, Xijun Liu, Deming Li, Rama Chellappa
arXiv preprint, 7 Mar 2024
[arXiv]
From Chaos to Clarity: 3DGS in the Dark
Zhihao Li, Yufei Wang, Alex Kot, Bihan Wen
arXiv preprint, 12 Jun 2024
[arXiv]
Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field
Chao Wang, Krzysztof Wolski, Bernhard Kerbl, Ana Serrano, Mojtaba Bemana, Hans-Peter Seidel, Karol Myszkowski, Thomas Leimkühler
arXiv preprint, 11 Jun 2024
[arXiv]
Lighting Every Darkness with 3DGS: Fast Training and Real-Time Rendering for HDR View Synthesis
Xin Jin, Pengyi Jiao, Zheng-Peng Duan, Xingchao Yang, Chun-Le Guo, Bo Ren, Chongyi Li
arXiv preprint, 10 Jun 2024
[arXiv] [Code]
CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images
Junghe Lee, Donghyeong Kim, Dogyoon Lee, Suhwan Cho, Sangyoun Lee
arXiv preprint, 4 Jul 2024
[arXiv] [Project]
HDRSplat: Gaussian Splatting for High Dynamic Range 3D Scene Reconstruction from Raw Images
Shreyas Singh, Aryan Garg, Kaushik Mitra
arXiv preprint, 23 Jul 2024
[arXiv]
EaDeblur-GS: Event assisted 3D Deblur Reconstruction with Gaussian Splatting
Yuchen Weng, Zhengwen Shen, Ruofan Chen, Qi Wang, Jun Wang
arXiv preprint, 18 Jul 2024
[arXiv]
HDRGS: High Dynamic Range Gaussian Splatting
Jiahao Wu, Lu Xiao, Chao Wang, Rui Peng, Kaiqiang Xiong, Ronggang Wang
arXiv preprint, 13 Aug 2024
[arXiv]
DeRainGS: Gaussian Splatting for Enhanced Scene Reconstruction in Rainy Environments
Shuhong Liu, Xiang Chen, Hongming Chen, Quanfeng Xu, Mingrui Li
arXiv preprint, 21 Aug 2024
[arXiv]
Gaussian in the Dark: Real-Time View Synthesis From Inconsistent Dark Images Using Gaussian Splatting
Sheng Ye, Zhen-Hui Dong, Yubin Hu, Yu-Hui Wen, Yong-Jin Liu
PG 2024, 17 Aug 2024
[arXiv]
GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring
Dongwoo Lee, Joonkyu Park, Kyoung Mu Lee
NeurIPS 2024, 31 Oct 2024
[arXiv]
CoCoGaussian: Leveraging Circle of Confusion for Gaussian Splatting from Defocused Images
Jungho Lee, Suhwan Cho, Taeoh Kim, Ho-Deok Jang, Minhyeok Lee, Geonho Cha, Dongyoon Wee, Dogyoon Lee, Sangyoun Lee
20 Dec 2024
[arXiv] [Project] [Code]
Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces
Mauro Comi, Alessio Tonioni, Max Yang, Jonathan Tremblay, Valts Blukis, Yijiong Lin, Nathan F. Lepora, Laurence Aitchison
arXiv preprint, 29 Mar 2024
[arXiv]
Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting
Jiarui Meng, Haijie Li, Yanmin Wu, Qiankun Gao, Shuzhou Yang, Jian Zhang, Siwei Ma
arXiv preprint, 1 Apr 2024
[arXiv]
RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering
Xianqiang Lyu, Hui Liu, Junhui Hou
arXiv preprint, 17 Apr 2024
[arXiv]
DeblurGS: Gaussian Splatting for Camera Motion Blur
Jeongtaek Oh, Jaeyoung Chung, Dongwoo Lee, Kyoung Mu Lee
arXiv preprint, 17 Apr 2024
[arXiv]
3D Gaussian Splatting with Deferred Reflection
Keyang Ye, Qiming Hou, Kun Zhou
arXiv preprint, 29 Apr 2024
[arXiv]
MirrorGaussian: Reflecting 3D Gaussians for Reconstructing Mirror Reflections
Jiayue Liu, Xiao Tang, Freeman Cheng, Roy Yang, Zhihao Li, Jianzhuang Liu, Yi Huang, Jiaqi Lin, Shiyong Liu, Xiaofei Wu, Songcen Xu, Chun Yuan
arXiv preprint, 20 May 2024
[arXiv] [Project]
DC-Gaussian: Improving 3D Gaussian Splatting for Reflective Dash Cam Videos
Linhan Wang, Kai Cheng, Shuo Lei, Shengkun Wang, Wei Yin, Chenyang Lei, Xiaoxiao Long, Chang-Tien Lu
arXiv preprint, 27 May 2024
[arXiv] [Project] [Code]
RefGaussian: Disentangling Reflections from 3D Gaussian Splatting for Realistic Rendering
Rui Zhang, Tianyue Luo, Weidong Yang, Ben Fei, Jingyi Xu, Qingyuan Zhou, Keyi Liu, Ying He
arXiv preprint, 9 Jun 2024
[arXiv]
Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization
Zihan Wang, Shuzhe Wang, Matias Turkulainen, Junyuan Fang, Juho Kannala
arXiv preprint, 2 Oct 2024
[arXiv]
Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, Marcus Magnor
arXiv preprint, 10 Oct 2024
[arXiv] [Project]
SRGS: Super-Resolution 3D Gaussian Splatting
Xiang Feng, Yongbo He, Yubo Wang, Yan Yang, Zhenzhong Kuang, Yu Jun, Jianping Fan, Jiajun Ding
ACM MM 2024, 16 Apr 2024
[arXiv]
GaussianSR: 3D Gaussian Super-Resolution with 2D Diffusion Priors
Xiqian Yu, Hanxin Zhu, Tianyu He, Zhibo Chen
arXiv preprint, 14 Jun 2024
[arXiv] [Project]
GaussianSR: High Fidelity 2D Gaussian Splatting for Arbitrary-Scale Image Super-Resolution
Jintong Hu, Bin Xia, Bin Chen, Wenming Yang, Lei Zhang
arXiv preprint, 25 Jul 2024
[arXiv]
SuperGS: Super-Resolution 3D Gaussian Splatting via Latent Feature Field and Gradient-guided Splitting
Shiyun Xie, Zhiru Wang, Yinghao Zhu, Chengwei Pan
arXiv preprint, 3 Oct 2024
[arXiv]
Zero-shot Point Cloud Completion Via 2D Priors
Tianxin Huang, Zhiwen Yan, Yuyang Zhao, Gim Hee Lee
arXiv preprint, 10 Apr 2024
[arXiv]
Photorealistic 3D Urban Scene Reconstruction and Point Cloud Extraction using Google Earth Imagery and Gaussian Splatting
Kyle Gao, Dening Lu, Hongjie He, Linlin Xu, Jonathan Li
arXiv preprint, 17 May 2024
[arXiv]
PFGS: High Fidelity Point Cloud Rendering via Feature Splatting
Jiaxu Wang, Ziyi Zhang, Junhao He, Renjing Xu
arXiv preprint, 4 Jul 2024
[arXiv]
GaussianPainter: Painting Point Cloud into 3D Gaussians with Normal Guidance
Jingqiu Zhou, Lue Fan, Xuesong Chen, Linjiang Huang, Si Liu, Hongsheng Li
AAAI 2025, 23 Dec 2024
[arXiv]
3DGS-Calib: 3D Gaussian Splatting for Multimodal SpatioTemporal Calibration
Quentin Herau, Moussab Bennehar, Arthur Moreau, Nathan Piasco, Luis Roldao, Dzmitry Tsishkou, Cyrille Migniot, Pascal Vasseur, Cédric Demonceaux
arXiv preprint, 18 Mar 2024
[arXiv]
GaussReg: Fast 3D Registration with Gaussian Splatting
Jiahao Chang, Yinglin Xu, Yihao Li, Yuantao Chen, Xiaoguang Han
ECCV 2024, 7 Jul 2024
[arXiv]
Dual-Camera Smooth Zoom on Mobile Phones
Renlong Wu, Zhilu Zhang, Yu Yang, Wangmeng Zuo
arXiv preprint, 7 Apr 2024
[arXiv]
Event3DGS: Event-based 3D Gaussian Splatting for Fast Egomotion
Tianyi Xiong, Jiayi Wu, Botao He, Cornelia Fermuller, Yiannis Aloimonos, Heng Huang, Christopher A. Metzler
arXiv preprint, 5 Jun 2024
[arXiv]
E2GS: Event Enhanced Gaussian Splatting
Hiroyuki Deguchi, Mana Masuda, Takuya Nakabayashi, Hideo Saito
arXiv preprint, 21 Jun 2024
[arXiv]
SpikeGS: Reconstruct 3D scene via fast-moving bio-inspired sensors
Yijia Guo, Liwen Hu, Lei Ma, Tiejun Huang
arXiv preprint, 4 Jul 2024
[arXiv]
SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion
Jiyuan Zhang, Kang Chen, Shiyan Chen, Yajing Zheng, Tiejun Huang, Zhaofei Yu
arXiv preprint, 14 Jul 2024
[arXiv]
Ev-GS: Event-based Gaussian splatting for Efficient and Accurate Radiance Field Rendering
Jingqian Wu, Shuo Zhu, Chutian Wang, Edmund Y. Lam
arXiv preprint, 16 Jul 2024
[arXiv]
IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera
Jian Huang, Chengrui Dong, Peidong Liu
arXiv preprint, 10 Oct 2024
[arXiv] [Code]
EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting
Bohao Liao, Wei Zhai, Zengyu Wan, Tianzhu Zhang, Yang Cao, Zheng-Jun Zha
arXiv preprint, 20 Oct 2024
[arXiv] [Project]
EventSplat: 3D Gaussian Splatting from Moving Event Cameras for Real-time Rendering
Toshiya Yura, Ashkan Mirzaei, Igor Gilitschenski
10 Dec 2024
[arXiv]
SweepEvGS: Event-Based 3D Gaussian Splatting for Macro and Micro Radiance Field Rendering from a Single Sweep
Jingqian Wu, Shuo Zhu, Chutian Wang, Boxin Shi, Edmund Y. Lam
16 Dec 2024
[arXiv]
Advancing Extended Reality with 3D Gaussian Splatting: Innovations and Prospects
Shi Qiu, Binzhu Xie, Qixuan Liu, Pheng-Ann Heng
9 Dec 2024
[arXiv]
Exploring the Feasibility of Generating Realistic 3D Models of Endangered Species Using DreamGaussian: An Analysis of Elevation Angle's Impact on Model Generation
Selcuk Anil Karatopak, Deniz Sen
arXiv preprint, 15 Dec 2023
[arXiv]
EndoGaussian: Gaussian Splatting for Deformable Surgical Scene Reconstruction
Yifan Liu, Chenxin Li, Chen Yang, Yixuan Yuan
arXiv preprint, 23 Jan 2024
[arXiv] [Project] [Code]
Deformable Endoscopic Tissues Reconstruction with Gaussian Splatting
Lingting Zhu, Zhao Wang, Zhenchao Jin, Guying Lin, Lequan Yu
arXiv preprint, 21 Jan 2024
[arXiv] [Code]
Endo-4DGS: Distilling Depth Ranking for Endoscopic Monocular Scene Reconstruction with 4D Gaussian Splatting
Yiming Huang, Beilei Cui, Long Bai, Ziqi Guo, Mengya Xu, Hongliang Ren
arXiv preprint, 29 Jan 2024
[arXiv]
Radiative Gaussian Splatting for Efficient X-ray Novel View Synthesis
Yuanhao Cai, Yixun Liang, Jiahao Wang, Angtian Wang, Yulun Zhang, Xiaokang Yang, Zongwei Zhou, Alan Yuille
arXiv preprint, 7 Mar 2024
[arXiv] [Video]
TOGS: Gaussian Splatting with Temporal Opacity Offset for Real-Time 4D DSA Rendering
Shuai Zhang, Huangxuan Zhao, Zhenghong Zhou, Guanjun Wu, Chuansheng Zheng, Xinggang Wang, Wenyu Liu
arXiv preprint, 28 Mar 2024
[arXiv]
Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction
Sierra Bonilla, Shuai Zhang, Dimitrios Psychogyios, Danail Stoyanov, Francisco Vasconcelos, Sophia Bano
arXiv preprint, 9 Apr 2024
[arXiv]
Novel View Synthesis for Cinematic Anatomy on Mobile and Immersive Displays
Simon Niedermayr, Christoph Neuhauser, Kaloian Petkov, Klaus Engel, Rüdiger Westermann
arXiv preprint, 17 Apr 2024
[arXiv]
HFGS: 4D Gaussian Splatting with Emphasis on Spatial and Temporal High-Frequency Components for Endoscopic Scene Reconstruction
Haoyu Zhao, Xingyue Zhao, Lingting Zhu, Weixi Zheng, Yongchao Xu
arXiv preprint, 28 May 2024
[arXiv]
Deform3DGS: Flexible Deformation for Fast Surgical Scene Reconstruction with Gaussian Splatting
Shuojue Yang, Qian Li, Daiyun Shen, Bingchen Gong, Qi Dou, Yueming Jin
arXiv preprint, 28 May 2024
[arXiv] [Code]
R^2-Gaussian: Rectifying Radiative Gaussian Splatting for Tomographic Reconstruction
Ruyi Zha, Tao Jun Lin, Yuanhao Cai, Jiwen Cao, Yanhao Zhang, Hongdong Li
arXiv preprint, 31 May 2024
[arXiv]
DDGS-CT: Direction-Disentangled Gaussian Splatting for Realistic Volume Rendering
Zhongpai Gao, Benjamin Planche, Meng Zheng, Xiao Chen, Terrence Chen, Ziyan Wu
arXiv preprint, 4 Jun 2024
[arXiv]
Gaussian Representation for Deformable Image Registration
Jihe Li, Fabian Zhang, Xia Li, Tianhao Zhang, Ye Zhang, Joachim Buhmann
arXiv preprint, 5 Jun 2024
[arXiv]
LGS: A Light-weight 4D Gaussian Splatting for Efficient Surgical Scene Reconstruction
Hengyu Liu, Yifan Liu, Chenxin Li, Wuyang Li, Yixuan Yuan
MICCAI 2024, 23 Jun 2024
[arXiv] [Project] [Code]
Free-SurGS: SfM-Free 3D Gaussian Splatting for Surgical Scene Reconstruction
Jiaxin Guo, Jiangliu Wang, Di Kang, Wenzhen Dong, Wenting Wang, Yun-hui Liu
MICCAI 2024, 3 Jul 2024
[arXiv] [Code]
EndoSparse: Real-Time Sparse View Synthesis of Endoscopic Scenes using Gaussian Splatting
Chenxin Li, Brandon Y. Feng, Yifan Liu, Hengyu Liu, Cheng Wang, Weihao Yu, Yixuan Yuan
MICCAI 2024, 1 Jul 2024
[arXiv] [Project]
SurgicalGaussian: Deformable 3D Gaussians for High-Fidelity Surgical Scene Reconstruction
Weixing Xie, Junfeng Yao, Xianpeng Cao, Qiqin Lin, Zerui Tang, Xiao Dong, Xiaohu Guo
arXiv preprint, 6 Jul 2024
[arXiv] [Project] [Video] [Code]
Realistic Surgical Image Dataset Generation Based On 3D Gaussian Splatting
Tianle Zeng, Gerardo Loza Galindo, Junlei Hu, Pietro Valdastri, Dominic Jones
MICCAI 2024, 20 Jul 2024
[arXiv]
A Review of 3D Reconstruction Techniques for Deformable Tissues in Robotic Surgery
Mengya Xu, Ziqi Guo, An Wang, Long Bai, Hongliang Ren
MICCAI 2024, 8 Aug 2024
[arXiv] [Code]
Free-DyGS: Camera-Pose-Free Scene Reconstruction based on Gaussian Splatting for Dynamic Surgical Videos
Qian Li, Shuojue Yang, Daiyun Shen, Yueming Jin
arXiv preprint, 2 Sep 2024
[arXiv]
Online 3D reconstruction and dense tracking in endoscopic videos
Michel Hayoz, Christopher Hahne, Thomas Kurmann, Max Allan, Guido Beldi, Daniel Candinas, Pablo Márquez-Neila, Raphael Sznitman
arXiv preprint, 9 Sep 2024
[arXiv]
Seamless Augmented Reality Integration in Arthroscopy: A Pipeline for Articular Reconstruction and Guidance
Hongchao Shu, Mingxu Liu, Lalithkumar Seenivasan, Suxi Gu, Ping-Cheng Ku, Jonathan Knopf, Russell Taylor, Mathias Unberath
AE-CAI 2024, 1 Oct 2024
[arXiv]
Multi-Layer Gaussian Splatting for Immersive Anatomy Visualization
Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth
arXiv preprint, 22 Oct 2024
[arXiv]
TomoGRAF: A Robust and Generalizable Reconstruction Network for Single-View Computed Tomography
Di Xu, Yang Yang, Hengjie Liu, Qihui Lyu, Martina Descovich, Dan Ruan, Ke Sheng
arXiv preprint, 12 Nov 2024
[arXiv]
PR-ENDO: Physically Based Relightable Gaussian Splatting for Endoscopy
Joanna Kaleta, Weronika Smolak-Dyżewska, Dawid Malarz, Diego Dall'Alba, Przemysław Korzeniowski, Przemysław Spurek
19 Nov 2024
[arXiv]
RecGS: Removing Water Caustic with Recurrent Gaussian Splatting
Tianyi Zhang, Weiming Zhi, Kaining Huang, Joshua Mangelson, Corina Barbalata, Matthew Johnson-Roberson
arXiv preprint, 14 Jul 2024
[arXiv]
WaterSplatting: Fast Underwater 3D Scene Reconstruction Using Gaussian Splatting
Huapeng Li, Wenxuan Song, Tianao Xu, Alexandre Elsig, Jonas Kulhanek
arXiv preprint, 15 Aug 2024
[arXiv] [Project] [Code]
SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model
Daniel Yang, John J. Leonard, Yogesh Girdhar
arXiv preprint, 25 Sep 2024
[arXiv] [Project] [Video]
UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction
Haoran Wang, Nantheera Anantrasirichai, Fan Zhang, David Bull
arXiv preprint, 2 Oct 2024
[arXiv]
NeuroPump: Simultaneous Geometric and Color Rectification for Underwater Images
Yue Guo, Haoxiang Liao, Haibin Ling, Bingyao Huang
20 Dec 2024
[arXiv]
Comparative Analysis of Novel View Synthesis and Photogrammetry for 3D Forest Stand Reconstruction and extraction of individual tree parameters
Guoji Tian, Chongcheng Chen, Hongyu Huang
arXiv preprint, 8 Oct 2024
[arXiv]
Biomass phenotyping of oilseed rape through UAV multi-view oblique imaging with 3DGS and SAM model
Yutao Shen, Hongyu Zhou, Xin Yang, Xuqi Lu, Ziyue Guo, Lixi Jiang, Yong He, Haiyan Cen
arXiv preprint, 13 Nov 2024
[arXiv]
WRF-GS: Wireless Radiation Field Reconstruction with 3D Gaussian Splatting
Chaozheng Wen, Jingwen Tong, Yingdong Hu, Zehong Lin, Jun Zhang
INFOCOM 2025, 6 Dec 2024
[arXiv]
SplatPose & Detect: Pose-Agnostic 3D Anomaly Detection
Mathis Kruse, Marco Rudolph, Dominik Woiwode, Bodo Rosenhahn
CVPR 2024, 10 Apr 2024
[arXiv]
SplatPose+: Real-time Image-Based Pose-Agnostic 3D Anomaly Detection
Yizhe Liu, Yan Song Hu, Yuhao Chen, John Zelek
arXiv preprint, 15 Oct 2024
[arXiv]
SplatOverflow: Asynchronous Hardware Troubleshooting
Amritansh Kwatra, Tobias Wienberg, Ilan Mandel, Ritik Batra, Peter He, Francois Guimbretiere, Thijs Roumen
arXiv preprint, 4 Nov 2024
[arXiv] [Video]
GaussianStego: A Generalizable Stenography Pipeline for Generative 3D Gaussians Splatting
Chenxin Li, Hengyu Liu, Zhiwen Fan, Wuyang Li, Yifan Liu, Panwang Pan, Yixuan Yuan
arXiv preprint, 1 Jul 2024
[arXiv] [Project]
Poison-splat: Computation Cost Attack on 3D Gaussian Splatting
Jiahao Lu, Yifan Zhang, Qiuhong Shen, Xinchao Wang, Shuicheng Yan
arXiv preprint, 10 Oct 2024
[arXiv] [Code]
GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting
Xiufeng Huang, Ruiqi Li, Yiu-ming Cheung, Ka Chun Cheung, Simon See, Renjie Wan
arXiv preprint, 31 Oct 2024
[arXiv]
Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images
Qi Song, Ziyuan Luo, Ka Chun Cheung, Simon See, Renjie Wan
NeurIPS 2024, 30 Oct 2024
[arXiv]
Towards More Accurate Fake Detection on Images Generated from Advanced Generative and Neural Rendering Models
Chengdong Dong, Vijayakumar Bhagavatula, Zhenyu Zhou, Ajay Kumar
arXiv preprint, 13 Nov 2024
[arXiv]
GuardSplat: Efficient and Robust Watermarking for 3D Gaussian Splatting
Zixuan Chen, Guangcong Wang, Jiahao Zhu, Jianhuang Lai, Xiaohua Xie
29 Nov 2024
[arXiv] [Project] [Code]
Splats in Splats: Embedding Invisible 3D Watermark within Gaussian Splatting
Yijia Guo, Wenkai Huang, Yang Li, Gaolei Li, Hang Zhang, Liwen Hu, Jianhua Li, Tiejun Huang, Lei Ma
4 Dec 2024
[arXiv] [Project]
WATER-GS: Toward Copyright Protection for 3D Gaussian Splatting via Universal Watermarking
Yuqi Tan, Xiang Liu, Shuzhao Xie, Bin Chen, Shu-Tao Xia, Zhi Wang
7 Dec 2024
[arXiv]
DRAGON: Drone and Ground Gaussian Splatting for 3D Building Reconstruction
Yujin Ham, Mateusz Michalkiewicz, Guha Balakrishnan
ICCP 2024, 1 Jul 2024
[arXiv]
Developing Smart MAVs for Autonomous Inspection in GPS-denied Constructions
Paoqiang Pan, Kewei Hu, Xiao Huang, Wei Ying, Xiaoxuan Xie, Yue Ma, Naizhong Zhang, Hanwen Kang
arXiv preprint, 12 Aug 2024
[arXiv]
Video2BEV: Transforming Drone Videos to BEVs for Video-based Geo-localization
Hao Ju, Zhedong Zheng
20 Nov 2024
[arXiv]
Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes
Lihan Jiang, Kerui Ren, Mulin Yu, Linning Xu, Junting Dong, Tao Lu, Feng Zhao, Dahua Lin, Bo Dai
2 Dec 2024
[arXiv]
Extrapolated Urban View Synthesis Benchmark
Xiangyu Han, Zhen Jia, Boyi Li, Yan Wang, Boris Ivanovic, Yurong You, Lingjie Liu, Yue Wang, Marco Pavone, Chen Feng, Yiming Li
6 Dec 2024
[arXiv] [Project] [Code]
SOUS VIDE: Cooking Visual Drone Navigation Policies in a Gaussian Splatting Vacuum
JunEn Low, Maximilian Adang, Javier Yu, Keiko Nagami, Mac Schwager
20 Dec 2024
[arXiv]
Reconstructing Satellites in 3D from Amateur Telescope Images
Zhiming Chang, Boyang Liu, Yifei Xia, Youming Guo, Boxin Shi, He Sun
arXiv preprint, 29 Apr 2024
[arXiv]
SatSplatYOLO: 3D Gaussian Splatting-based Virtual Object Detection Ensembles for Satellite Feature Recognition
Van Minh Nguyen, Emma Sandidge, Trupti Mahendrakar, Ryan T. White
arXiv preprint, 4 Jun 2024
[arXiv]
Embracing Radiance Field Rendering in 6G: Over-the-Air Training and Inference with 3D Contents
Guanlin Wu, Zhonghao Lyu, Juyong Zhang, Jie Xu
arXiv preprint, 20 May 2024
[arXiv]
AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis
Swapnil Bhosale, Haosen Yang, Diptesh Kanojia, Jiankang Deng, Xiatian Zhu
arXiv preprint, 13 Jun 2024
[arXiv]
LayerPano3D: Layered 3D Panorama for Hyper-Immersive Scene Generation
Shuai Yang, Jing Tan, Mengchen Zhang, Tong Wu, Yixuan Li, Gordon Wetzstein, Ziwei Liu, Dahua Lin
arXiv preprint, 23 Aug 2024
[arXiv] [Project]
Pano2Room: Novel View Synthesis from a Single Indoor Panorama
Guo Pu, Yiming Zhao, Zhouhui Lian
SIGGRAPH Asia 2024, 21 Aug 2024
[arXiv] [Code]
🔥 Thermal3D-GS: Physics-induced 3D Gaussians for Thermal Infrared Novel-view Synthesis
Qian Chen, Shihao Shu, Xiangzhi Bai
ECCV 2024, 12 Sep 2024
Abstract
Novel-view synthesis based on visible light has been extensively studied. In comparison to visible light imaging, thermal infrared imaging offers the advantage of all-weather imaging and strong penetration, providing increased possibilities for reconstruction in nighttime and adverse weather scenarios. However, thermal infrared imaging is influenced by physical characteristics such as atmospheric transmission effects and thermal conduction, hindering the precise reconstruction of intricate details in thermal infrared scenes, manifesting as issues of floaters and indistinct edge features in synthesized images. To address these limitations, this paper introduces a physics-induced 3D Gaussian splatting method named Thermal3D-GS. Thermal3D-GS begins by modeling atmospheric transmission effects and thermal conduction in three-dimensional media using neural networks. Additionally, a temperature consistency constraint is incorporated into the optimization objective to enhance the reconstruction accuracy of thermal infrared images. Furthermore, to validate the effectiveness of our method, the first large-scale benchmark dataset for this field named Thermal Infrared Novel-view Synthesis Dataset (TI-NSD) is created. This dataset comprises 20 authentic thermal infrared video scenes, covering indoor, outdoor, and UAV (Unmanned Aerial Vehicle) scenarios, totaling 6,664 frames of thermal infrared image data. Based on this dataset, this paper experimentally verifies the effectiveness of Thermal3D-GS. The results indicate that our method outperforms the baseline method with a 3.03 dB improvement in PSNR and significantly addresses the issues of floaters and indistinct edge features present in the baseline method. Our dataset and codebase will be released at Thermal3DGS (this https URL).
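As a hedged reading of the abstract's temperature consistency constraint, the sketch below regularizes a rendered thermal image toward smooth local variation, reflecting heat conduction between nearby surface points. This total-variation form is an illustrative assumption, not the paper's exact objective.

```python
# Sketch of a temperature-consistency style regularizer: since nearby
# surface points conduct heat, neighboring pixels of a rendered thermal
# image should vary smoothly. This TV form is an illustrative assumption.
import torch

def temperature_consistency(thermal: torch.Tensor) -> torch.Tensor:
    """thermal: (H, W) rendered thermal intensity."""
    dx = (thermal[:, 1:] - thermal[:, :-1]).abs().mean()
    dy = (thermal[1:, :] - thermal[:-1, :]).abs().mean()
    return dx + dy

loss = temperature_consistency(torch.rand(240, 320))
```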
ThermalGaussian: Thermal 3D Gaussian Splatting
Rongfeng Lu, Hangyu Chen, Zunjie Zhu, Yuhang Qin, Ming Lu, Le Zhang, Chenggang Yan, Anke Xue
arXiv preprint, 11 Sep 2024
[arXiv]
Fisheye-GS: Lightweight and Extensible Gaussian Splatting Module for Fisheye Cameras
Zimu Liao, Siyan Chen, Rong Fu, Yi Wang, Zhongling Su, Hao Luo, Li Ma, Linning Xu, Bo Dai, Hengjie Li, Zhilin Pei, Xingcheng Zhang
arXiv preprint, 7 Sep 2024
[arXiv]
SCIGS: 3D Gaussians Splatting from a Snapshot Compressive Image
Zixu Wang, Hao Yang, Yu Guo, Fei Wang
19 Nov 2024
[arXiv]
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
Haian Jin, Hanwen Jiang, Hao Tan, Kai Zhang, Sai Bi, Tianyuan Zhang, Fujun Luan, Noah Snavely, Zexiang Xu
arXiv preprint, 22 Oct 2024
[arXiv] [Project]
Exploring Dynamic Novel View Synthesis Technologies for Cinematography
Adrian Azzarelli, Nantheera Anantrasirichai, David R. Bull
23 Dec 2024
[arXiv]
FlowMap: High-Quality Camera Poses, Intrinsics, and Depth via Gradient Descent
Cameron Smith, David Charatan, Ayush Tewari, Vincent Sitzmann
arXiv preprint, 23 Apr 2024
[arXiv]
Learning-based Multi-View Stereo: A Survey
Fangjinhua Wang, Qingtian Zhu, Di Chang, Quankai Gao, Junlin Han, Tong Zhang, Richard Hartley, Marc Pollefeys
arXiv preprint, 27 Aug 2024
[arXiv]
Thanks to the community! We hope more and more people will join us and submit commits and PRs!
Made with contributors-img.
CC-0