Our system is built on the state-of-the-art VINS-Fusion, which comprises visual-inertial odometry (vins_estimator), pose graph optimization, and loop closure detection (loop_fusion). In this work, our proposed loop closure detection approach replaces VINS-Loop to improve robustness against viewpoint changes. The flow diagram below illustrates the pipeline and its three main stages:
First, keyframes from vins_estimator are processed to extract SuperPoint descriptors. Second, a loop candidate is retrieved from the database using the offline-trained visual vocabulary. Third, the SuperGlue model finds feature correspondences, which are then used to compute the relative pose. A loop is finally declared by checking the number of correspondences and the relative pose.
Features:
- visual loop closure detection approach robust against viewpoint differences
- combining deep learning models with state-of-the-art SLAM framework
Authors: Yutong Wang, Bin Xu, Wei Fan, Changle Xiang from the School of Mechanical Engineering, Beijing Institute of Technology.
Related Paper:
- Wang, Yutong, et al. "A Robust and Efficient Loop Closure Detection Approach for Hybrid Ground/Aerial Vehicles." Drones 7.2 (2023): 135.
Ubuntu 18.04 (64-bit) and ROS Melodic. Follow the ROS Installation instructions.
Follow the Ceres Solver Installation instructions.
Libtorch can be compiled from source, or you can simply download a precompiled version; the latter is recommended. Both CPU and GPU builds are available:
- CPU version: we have tested Libtorch 1.1, 1.4, and 1.7. Version 1.7 may conflict with OpenCV during compilation, so Libtorch 1.4 is recommended; it can be downloaded here.
- GPU version: we have tested with CUDA 10.1, cuDNN 7.6, and Libtorch 1.4. The CUDA and cuDNN versions must match the Libtorch version; you can check the compatibility of your GPU driver, CUDA, and Libtorch online.
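As a convenience, the download link for a given variant can be looked up with a small helper. This is only a sketch: the helper name `libtorch_url` is ours, and the URL patterns follow pytorch.org's published naming for the 1.4.0 builds, so verify the exact links on the official site before use.

```shell
# Sketch: map a Libtorch 1.4.0 variant to its likely download URL.
# URL patterns follow pytorch.org's naming and should be verified there.
libtorch_url() {
  case "$1" in
    cpu)   echo "https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.4.0%2Bcpu.zip" ;;
    cu101) echo "https://download.pytorch.org/libtorch/cu101/libtorch-cxx11-abi-shared-with-deps-1.4.0.zip" ;;
    *)     echo "unsupported variant: $1" >&2; return 1 ;;
  esac
}
```

For example, `wget "$(libtorch_url cpu)"` would fetch the CPU build.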
Note: after downloading, set your own path in the CMakeLists.txt of loop_fusion as follows:
set(Torch_DIR "$your own path$/libtorch/share/cmake/Torch")
find_package(Torch REQUIRED)
message(STATUS "Torch version is: ${Torch_VERSION}")
Clone the repository and build with catkin:
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone https://github.com/yutongwangBIT/SP-Loop.git
cd ../
catkin build
source ~/catkin_ws/devel/setup.bash
(If this step fails, try a machine with a clean system, or reinstall Ubuntu and ROS.)
Download the EuRoC MAV Dataset to YOUR_DATASET_FOLDER. Taking MH_01 as an example, you can run VINS-Fusion with three sensor configurations (monocular camera + IMU, stereo cameras + IMU, and stereo cameras only).
- Change the paths of the weights (sp_path and spglue_path) to your own in ~/catkin_ws/src/SP-Loop/config/euroc/euroc_sp_loop.yaml
- Change the path of bag_file to your own in ~/catkin_ws/src/SP-Loop/vins_estimator/launch/run_euroc_bag.launch
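To script the config edit above, a small helper can rewrite both keys in place. The helper name `set_weight_paths` is ours, and it assumes the config stores each path as a simple `key: value` line, as in euroc_sp_loop.yaml:

```shell
# Hypothetical helper: point sp_path/spglue_path at your weight files.
# Assumes plain "key: value" YAML lines, as in euroc_sp_loop.yaml.
set_weight_paths() {
  local cfg="$1" sp="$2" glue="$3"
  sed -i "s|^sp_path:.*|sp_path: $sp|" "$cfg"
  sed -i "s|^spglue_path:.*|spglue_path: $glue|" "$cfg"
}
```

For example, `set_weight_paths ~/catkin_ws/src/SP-Loop/config/euroc/euroc_sp_loop.yaml ~/weights/sp.pt ~/weights/spglue.pt` (the weight filenames here are placeholders for your own files).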
roslaunch vins_estimator run_euroc_bag.launch
- Change the paths of the weights (sp_path and spglue_path) to your own in ~/catkin_ws/src/SP-Loop/config/euroc/euroc_sp_loop.yaml
roslaunch vins_estimator run_euroc_bag_multi.launch
Play the rosbags in sequence:
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
rosbag play YOUR_DATASET_FOLDER/MH_02_easy.bag
rosbag play YOUR_DATASET_FOLDER/MH_03_medium.bag
rosbag play YOUR_DATASET_FOLDER/MH_04_difficult.bag
rosbag play YOUR_DATASET_FOLDER/MH_05_difficult.bag
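The five commands above can also be wrapped in a loop. In the sketch below, `play_mh_bags` is a hypothetical helper, and `PLAYER` is overridable so the loop can be dry-run (e.g. `PLAYER=echo`) before handing it to `rosbag play`:

```shell
# Sketch: play the five EuRoC MH bags back-to-back.
# PLAYER defaults to "rosbag play"; override it to dry-run the loop.
play_mh_bags() {
  local dataset="$1"
  local player="${PLAYER:-rosbag play}"
  for bag in MH_01_easy MH_02_easy MH_03_medium MH_04_difficult MH_05_difficult; do
    $player "$dataset/$bag.bag"
  done
}
```

For example, `play_mh_bags YOUR_DATASET_FOLDER` plays all five sequences in order.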
For security reasons, we will upload our real-world data later.
We use VINS-Fusion as the framework, Ceres Solver for non-linear optimization, DBoW2 for loop detection, a generic camera model, and GeographicLib. The pre-trained weights we adopt are from SuperPoint and SuperGlue.
The source code is released under the GPLv3 license.