Intelligent Robotics and Machine Vision Lab
Intelligent Robotics and Machine Vision Lab at Shanghai Jiao Tong University
China
Pinned Repositories
3DFlow
Code for the ECCV 2022 paper "What Matters in Supervised 3D Scene Flow"
DifFlow3D
[CVPR 2024] DifFlow3D: Toward Robust Uncertainty-Aware Scene Flow Estimation with Iterative Diffusion-Based Refinement
EfficientLO-Net
EfficientLO-Net: Efficient 3D Deep LiDAR Odometry (PAMI 2022)
HALFlow
Code for the TIP 2021 paper "Hierarchical Attention Learning of Scene Flow in 3D Point Clouds"
Point-Mamba
Point Mamba
PWCLONet
Code for the CVPR 2021 paper "PWCLO-Net: Deep LiDAR Odometry in 3D Point Clouds Using Hierarchical Embedding Mask Optimization"
RegFormer
[ICCV 2023] RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration
RLSAC
Official code repository for the ICCV 2023 paper "RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation".
SNI-SLAM
[CVPR'24] SNI-SLAM: Semantic Neural Implicit SLAM
TransLO
Code for the AAAI 2023 paper "TransLO: A Window-Based Masked Point Transformer Framework for Large-Scale LiDAR Odometry"
Intelligent Robotics and Machine Vision Lab's Repositories
IRMVLab/Point-Mamba
Point Mamba
IRMVLab/RegFormer
[ICCV 2023] RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration
IRMVLab/SNI-SLAM
[CVPR'24] SNI-SLAM: Semantic Neural Implicit SLAM
IRMVLab/DifFlow3D
[CVPR 2024] DifFlow3D: Toward Robust Uncertainty-Aware Scene Flow Estimation with Iterative Diffusion-Based Refinement
IRMVLab/TransLO
Code for the AAAI 2023 paper "TransLO: A Window-Based Masked Point Transformer Framework for Large-Scale LiDAR Odometry"
IRMVLab/LHMap-loc
Source code for LHMap-loc
IRMVLab/RLSAC
Official code repository for the ICCV 2023 paper "RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation".
IRMVLab/DSLO
Code for the paper "DSLO: Deep Sequence LiDAR Odometry Based on Inconsistent Spatio-temporal Propagation"
IRMVLab/I2PNet
Code for "End-to-end 2D-3D Registration between Image and LiDAR Point Cloud for Vehicle Localization"
IRMVLab/DELFlow
[ICCV 2023] DELFlow: Dense Efficient Learning of Scene Flow for Large-Scale Point Clouds
IRMVLab/DVLO
[ECCV 2024] DVLO: Deep Visual-LiDAR Odometry with Local-to-Global Feature Fusion and Bi-Directional Structure Alignment
IRMVLab/Pseudo-LiDAR-for-Visual-Odometry
IRMVLab/3DUnMonoFlow
Code for the ICRA 2021 paper "Unsupervised Learning of 3D Scene Flow from Monocular Camera"
IRMVLab/BCLearning
Code for the TNNLS 2022 paper "Learning of Long-Horizon Sparse-Reward Robotic Manipulator Tasks with Base Controllers"
IRMVLab/DDS-SLAM
Official code for "DDS-SLAM: Dense Semantic Neural SLAM for Deforming Endoscopic Scenes"
IRMVLab/e2e-NeRF-nav
IRMVLab/3DSF-PL
3D Scene Flow Estimation on Pseudo-LiDAR: Bridging the Gap on Estimating Point Motion (TII 2022)
IRMVLab/InterMOT
Interactive Multi-scale Fusion of 2D and 3D Features for Multi-object Tracking (TITS 2023)
IRMVLab/soft-nerf
Official code for "SoftNeRF: A self-modeling soft robot plugin for various tasks"
IRMVLab/Warehousing_Robots_Simulator
IRMVLab/Diff-IP2D
IRMVLab/LNI-SLAM
Official code for LNI-SLAM: Neural Implicit SLAM with Lines
IRMVLab/LTFNet
IRMVLab/NUE-NeRF-nav
IRMVLab/SDFPlane
[MICCAI 2024] SDFPlane: Explicit Neural Surface Reconstruction of Deformable Tissues
IRMVLab/FTM-nav
ECCV 2024
IRMVLab/RegFormerV2
Extended version of the ICCV 2023 paper "RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration"
IRMVLab/Dataset-Soft
Dataset for 3D tip force estimation of a cable-driven soft robot
IRMVLab/ENI-SLAM
Neural Implicit SLAM for Endoscopy
IRMVLab/PLPE-Depth
Code for the ICRA 2023 paper "Self-supervised Multi-frame Monocular Depth Estimation with Pseudo-LiDAR Pose Enhancement"