The following papers fit my PhD research interests. They mainly tackle real-time reconstruction of multiple dynamic objects or non-rigid objects. Some of the papers are offline methods, yet still very interesting.
Due to my personal interests, SfM-based and factorization-based work is not collected here; those papers can be found here.
Another related paper list, on semantic segmentation, can be found here.
- State of the Art in Real-time Registration of RGB-D Images (link)
- Visual SLAM and Structure from Motion in Dynamic Environments: A Survey (link)
- State of the Art on 3D Reconstruction with RGB-D Cameras (link)
- Joint 3D Reconstruction of a Static Scene and Moving Objects (3DV 2017)
- Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects (ICRA 2017)
- http://visual.cs.ucl.ac.uk/pubs/cofusion/ (website)
- https://github.com/martinruenz/co-fusion (code)
- http://visual.cs.ucl.ac.uk/pubs/cofusion/icra2017_co-fusion_print.pdf (paper)
- Semantic segmentation is pre-computed
- MaskFusion (submitted to ISMAR 2018)
- Relevant: SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks (ICRA 2017)
- Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation (ECCV 2018)
- paper: https://arxiv.org/abs/1804.04259
- slides: http://on-demand.gputechconf.com/gtc/2018/presentation/s8798-learning-rigidity-in-dynamic-scenes-for-scene-flow-estimation-v2.pdf
- video: https://www.youtube.com/watch?v=MnTHkOCY790&feature=youtu.be
- web: http://research.nvidia.com/publication/2018-09_Learning-Rigidity-in
- SfM-Net: Learning of Structure and Motion from Video
- Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving (ECCV 2018)
- Estimating Metric Poses of Dynamic Objects Using Monocular Visual-inertial Fusion (IROS 2018)
- Real-Time Object Pose Estimation with Pose Interpreter Networks (IROS 2018)
- http://www.liuyebin.com/4d.html (website)
- DynamicFusion: Reconstruction and Tracking of Non-rigid Scenes in Real-Time (CVPR 2015)
- 3D Scanning Deformable Objects with a Single RGBD Sensor (CVPR 2015)
- VolumeDeform: Real-time Volumetric Non-rigid Reconstruction (ECCV 2016)
- Fusion4D: Real-time Performance Capture of Challenging Scenes (SIGGRAPH 2016)
- Real-time Geometry, Albedo and Motion Reconstruction Using a Single RGBD Camera (SIGGRAPH 2017)
- KillingFusion: Non-rigid 3D Reconstruction without Correspondences (CVPR 2017)
- SobolevFusion: 3D Reconstruction of Scenes Undergoing Free Non-rigid Motion (CVPR 2018)
- MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects (TVCG 2017)
- DoubleFusion: Real-time Capture of Human Performances with Inner Body Shapes from a Single Depth Sensor (CVPR 2018)
- Fast Odometry and Scene Flow from RGB-D Cameras based on Geometric Clustering (ICRA 2017)
- StaticFusion: Background Reconstruction for Dense RGB-D SLAM in Dynamic Environments (ICRA 2018)
- Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments (ICRA 2018)
- Detecting, Tracking and Eliminating Dynamic Objects in 3D Mapping using Deep Learning and Inpainting (ICRA 2018)
- Video Pop-up: Monocular 3D Reconstruction of Dynamic Scenes (ECCV 2014)
- Robust Non-rigid Motion Tracking and Surface Reconstruction Using L0 Regularization (ICCV 2015)
- http://media.au.tsinghua.edu.cn/liuyebin_files/nonrigid/nonrigid.pdf (paper)
- Code available by sending mail to the author
- http://media.au.tsinghua.edu.cn/nonrigid.html (website)
- Dense Multibody Motion Estimation and Reconstruction from a Handheld Camera (ISMAR 2012)
- Robust Dense Mapping for Large-Scale Dynamic Environments (ICRA 2018)
- http://siegedog.com/dynslam/ (website)
- http://siegedog.com/assets/dynslam/robust-dense-mapping-paper-submission.pdf (paper)
- https://github.com/AndreiBarsan/DynSLAM (code)
    - Semantic segmentation is pre-computed (cars)
- Multimotion Visual Odometry (MVO): Simultaneous Estimation of Camera and Third-Party Motions (IROS 2018)
- DeMoN: Depth and Motion Network for Learning Monocular Stereo
- Unsupervised Learning of Depth and Ego-Motion from Video
- Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency
- OctNet: Learning Deep 3D Representations at High Resolutions