ORB-SLAM2

Authors: Raúl Mur-Artal, Juan D. Tardós, J. M. M. Montiel and Dorian Gálvez-López (DBoW2)
This is the ROS implementation of the ORB-SLAM2 real-time SLAM library for monocular, stereo and RGB-D cameras. It computes the camera trajectory and a sparse 3D reconstruction (with true scale in the stereo and RGB-D cases), and it can detect loops and relocalize the camera in real time. This implementation removes the Pangolin dependency and the original viewer; for visualization you can use RViz. The original implementation can be found at https://github.com/raulmur/ORB_SLAM2.
[Monocular] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.
[Stereo and RGB-D] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras. IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017. PDF.
[DBoW2 Place Recognizer] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012. PDF.
ORB-SLAM2 is released under a GPLv3 license. For a list of all code/library dependencies (and associated licenses), please see Dependencies.md.
For a closed-source version of ORB-SLAM2 for commercial purposes, please contact the authors: orbslam (at) unizar (dot) es.
If you use ORB-SLAM2 (Monocular) in an academic work, please cite:
```bibtex
@article{murTRO2015,
  title={{ORB-SLAM}: a Versatile and Accurate Monocular {SLAM} System},
  author={Mur-Artal, Ra\'ul and Montiel, J. M. M. and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={31},
  number={5},
  pages={1147--1163},
  doi={10.1109/TRO.2015.2463671},
  year={2015}
}
```
If you use ORB-SLAM2 (Stereo or RGB-D) in an academic work, please cite:
```bibtex
@article{murORB2,
  title={{ORB-SLAM2}: an Open-Source {SLAM} System for Monocular, Stereo and {RGB-D} Cameras},
  author={Mur-Artal, Ra\'ul and Tard\'os, Juan D.},
  journal={IEEE Transactions on Robotics},
  volume={33},
  number={5},
  pages={1255--1262},
  doi={10.1109/TRO.2017.2705103},
  year={2017}
}
```
We have tested the library in Ubuntu 16.04 with ROS Kinetic. A powerful computer (e.g. i7) will ensure real-time performance and provide more stable and accurate results.
We use the new thread and chrono functionalities of C++11.
We use OpenCV to manipulate images and features. Download and install instructions can be found at: http://opencv.org. At least OpenCV 2.4.3 is required. Tested with OpenCV 2.4.11 and OpenCV 3.2.
Eigen3 is required by g2o (see below). Download and install instructions can be found at: http://eigen.tuxfamily.org. At least Eigen 3.1.0 is required.
We use modified versions of the DBoW2 library to perform place recognition and of the g2o library to perform non-linear optimizations. Both modified libraries (BSD licensed) are included in the Thirdparty folder.
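On Ubuntu 16.04 the prebuilt packages are usually sufficient. A minimal install sketch, assuming the stock Ubuntu package names and an already-configured ROS apt repository (building OpenCV and Eigen from source via the links above works equally well):

```bash
# OpenCV and Eigen3 from the Ubuntu 16.04 repositories
sudo apt-get update
sudo apt-get install libopencv-dev libeigen3-dev

# catkin build (used below) is provided by catkin_tools,
# which ships through the ROS package repository
sudo apt-get install python-catkin-tools
```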
This ROS node requires catkin_make_isolated or catkin build to build.
Clone the repository into your catkin workspace:

```bash
git clone https://github.com/aaide/ORB_SLAM2_ROS.git
```

To build the node, run

```bash
catkin build
```

in your catkin folder.
To run, the algorithm expects both a vocabulary file (see the paper) and a config file containing the camera calibration and some hyperparameters. The vocabulary file ships with this repository, together with config files for the Intel RealSense R200 camera. To use any other camera, you need to adjust the config file (one of the provided files can serve as a template). They are located at orb_slam2/config.
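The config files use the standard ORB-SLAM2 settings format, an OpenCV FileStorage YAML. The sketch below shows the kind of entries you would adapt for a different camera; all numbers are placeholders, not a real calibration:

```yaml
%YAML:1.0

# Pinhole camera intrinsics: focal lengths and principal point, in pixels
Camera.fx: 615.0
Camera.fy: 615.0
Camera.cx: 320.0
Camera.cy: 240.0

# Radial (k) and tangential (p) distortion coefficients
Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0

# Frames per second and color order of the input images (0: BGR, 1: RGB)
Camera.fps: 30.0
Camera.RGB: 1

# ORB extractor hyperparameters: feature count and scale pyramid
ORBextractor.nFeatures: 1000
ORBextractor.scaleFactor: 1.2
ORBextractor.nLevels: 8
```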
The launch files, which can be found at ros/launch, expose several parameters that can be adjusted (see the example launch snippet after this list):
- publish_pointcloud: Bool. Whether the point cloud containing all key points (the map) should be published.
- localize_only: Bool. Toggle localization-only mode; when enabled, SLAM no longer adds new points to the map.
- reset_map: Bool. Set to true to erase the map and start anew. After the reset, the parameter automatically flips back to false.
- pointcloud_frame_id: String. The frame id of the point cloud/map.
- camera_frame_id: String. The frame id of the camera position.
- min_num_kf_in_map: Int. Minimum number of key frames a map must contain so that it is not reset after tracking is lost.
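As an illustration, parameters like these are set with standard <param> tags inside a launch file. The node name, executable and values below are assumptions made for this sketch; the shipped launch files in ros/launch are the authoritative reference:

```xml
<launch>
  <!-- Hypothetical node entry; pkg matches this repository, the rest is illustrative -->
  <node name="orb_slam2_mono" pkg="orb_slam2_ros" type="orb_slam2_mono" output="screen">
    <param name="publish_pointcloud"  type="bool"   value="true" />
    <param name="localize_only"       type="bool"   value="false" />
    <param name="reset_map"           type="bool"   value="false" />
    <param name="pointcloud_frame_id" type="string" value="map" />
    <param name="camera_frame_id"     type="string" value="camera_link" />
    <param name="min_num_kf_in_map"   type="int"    value="5" />
  </node>
</launch>
```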
The following topics are published and subscribed to by the nodes (a quick way to inspect them at runtime is shown after this list).

All nodes publish (depending on the settings):

- a PointCloud2 containing all key points of the map,
- the live image from the camera, annotated with the currently found key points and a status text, and
- a tf from the point cloud frame id to the camera frame id (the camera position).

The mono node subscribes to:

- /camera/image_raw for the input image.

The RGBD node subscribes to:

- /camera/rgb/image_raw for the RGB image and
- /camera/depth_registered/image_raw for the depth image.
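To sanity-check the wiring at runtime, the standard ROS introspection tools are enough; a minimal sketch using the topic names listed above:

```bash
# List all currently advertised and subscribed topics
rostopic list

# Verify that images arrive on the mono node's input topic
rostopic hz /camera/image_raw

# Watch the frames being broadcast on tf
rosrun tf tf_monitor
```

If your camera driver publishes under different topic names, a standard <remap> tag in the launch file redirects them to the names expected here.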
After sourcing your setup file with

```bash
source devel/setup.bash
```

you can run the corresponding nodes with one of the following commands:
```bash
roslaunch orb_slam2_ros orb_slam2_rgbd.launch
roslaunch orb_slam2_ros orb_slam2_mono.launch
```
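Since the built-in viewer was removed, RViz is the intended way to inspect the output. A minimal sketch (the exact display setup is up to you): start RViz, set the fixed frame to your pointcloud_frame_id, then add a PointCloud2 display for the map and an Image display for the annotated camera image:

```bash
rosrun rviz rviz
```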