
# RGBDTAM

RGBDTAM is an RGB-D SLAM algorithm that estimates a dense reconstruction of a scene in real time on a CPU, using monocular or RGB-D cameras. We are currently cleaning up the code and improving the efficiency of the algorithm; it should be ready in the coming days/weeks.

Related Publication: [1] Alejo Concha, Javier Civera. RGBDTAM: A cost-effective and accurate RGBD Tracking and Mapping System. Submitted to RA-L with IROS presentation.

Video of the results you should expect on the example sequences: coming soon.

# License

RGBDTAM is licensed under the GNU General Public License Version 3 (GPLv3), please see http://www.gnu.org/licenses/gpl.html.

For commercial purposes, please contact the authors.

# Disclaimer

This site and the code provided here are under active development. Even though we try to release only working, high-quality code, this version might still contain some issues. Please use it with caution.

# Dependencies

ROS:

We have tested RGBDTAM in Ubuntu 14.04 with ROS Indigo.

To install ROS Indigo, use the following command:

 sudo apt-get install ros-indigo-desktop

Or check the following link if you have any issue:

http://wiki.ros.org/indigo/Installation/Ubuntu

PCL library for visualization:

 sudo apt-get install ros-indigo-pcl-ros

Boost libraries, used to launch the different threads:

 sudo apt-get install libboost-all-dev 

Vocabulary used for loop closure and relocalization:

We use the vocabulary created by the ORB-SLAM authors. Please download the vocabulary from "www.github.com/raulmur/ORBvoc.txt.tar.gz", extract it, and place the resulting ORBvoc.txt in "ThirdParty/DBoW2/build/ORBvoc.txt".

# Installation

 git clone https://github.com/alejocb/rgbdtam.git

# Compilation

 catkin_make --pkg rgbdtam

Third party: superpixels compilation

Code used -> Efficient Graph-Based Image Segmentation. P. Felzenszwalb, D. Huttenlocher. International Journal of Computer Vision, Vol. 59, No. 2, September 2004

 cd root/catkin_workspace/src/rgbdtam/ThirdParty/segment
 make

# Usage

Launch rgbdtam from your 'catkin_workspace' folder:

 cd root/catkin_workspace
 rosrun rgbdtam rgbdtam

Note that rgbdtam should be located at:

 root/catkin_workspace/src/rgbdtam

Launch the visualizer of the current frame:

 rosrun image_view image_view image:=/rgbdtam/camera/image

Launch the visualizer of the map:

 rosrun rviz rviz

We are working on an automatic visualizer; for now, check the following screenshot to set up the rviz visualizer:

https://www.dropbox.com/s/pymufqi2i2aixys/visualization_rviz.png?oref=e&n=314995776

You can use a sequence from the TUM dataset to test the algorithm:

 rosbag play sequence.bag

There are two parameters that you have to modify in rgbdtam/src/data.yml before running a sequence:

1-) Intrinsic parameters of the camera:

'cameraMatrix'

'distCoeffs'

2-) Camera topic:

 camera_path: "/image_raw"

Update 'camera_path', 'cameraMatrix' and 'distCoeffs' in the file rgbdtam/src/data.yml.
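
As an illustration, the camera entries for a TUM freiburg1 sequence might look like the sketch below. The intrinsics shown are the published TUM freiburg1 calibration values and the topic name is the one used by the TUM rosbags; the exact key layout in data.yml may differ from this sketch, so adapt the field names to what the file already contains.

```yaml
# Hypothetical sketch of the camera section of rgbdtam/src/data.yml
# (OpenCV FileStorage-style matrices assumed). Replace the values
# with your own camera calibration.
cameraMatrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [517.3,   0.0, 318.6,   # fx,  0, cx
            0.0, 516.5, 255.3,   #  0, fy, cy
            0.0,   0.0,   1.0]
distCoeffs: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [0.2624, -0.9531, -0.0054, 0.0026, 1.1633]
camera_path: "/camera/rgb/image_color"   # RGB topic published by the TUM rosbags
```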

# Parameters

There are a few tuneable parameters that you can modify in rgbdtam/src/data.yml:

1-) Superpixel calculation

calculate_superpixels: [bool]. If 1, 3D superpixels will be calculated.

2-) Number of frames for mapping

num_cameras_mapping_th: [int]. Number of frames that you want to use to estimate the depth maps. Default: 9.

3-) Minimum parallax required for mapping

translational_ratio_th_min: [double]. Minimum parallax to insert a keyframe. Default: 0.075. Typical values [0.03-0.15].

4-) Degenerate cases in 3D superpixel matching

limit_ratio_sing_val: [double]. This threshold handles degenerate cases in the 3D superpixel calculation. Smaller values -> fewer outliers. Default: 100. Typical values [10-1000].

5-) Minimum normalized residual threshold required.

limit_normalized_residual: [double]. This threshold accounts for the minimum error required in the superpixel calculation. Smaller values -> fewer outliers. Default: 0.30. Typical values [0.05-0.50].

6-) Minimum number of matches of a 3D superpixel across multiple views to achieve multiview consistency.

matchings_active_search: [int]. Number of views in which a 3D superpixel must be matched. Larger values -> fewer outliers. Default: 3. Typical values [0-4].

7-) Kinect initialization

kinect_initialization: [bool]. If 1 it will use the Kinect (RGB-D) data for initialization. Default: 1.

8-) Minimum number of converged points

minim_points_converged: [int]. Default: 66.
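
Putting the defaults above together, the tuning section of data.yml would look roughly like the sketch below. The key names follow the parameter list above; their exact order and formatting in the real file may differ.

```yaml
# Hypothetical sketch of the tuning entries in rgbdtam/src/data.yml,
# filled in with the default values listed above.
calculate_superpixels: 1           # 1 -> compute 3D superpixels
num_cameras_mapping_th: 9          # frames used to estimate each depth map
translational_ratio_th_min: 0.075  # min parallax to insert a keyframe [0.03-0.15]
limit_ratio_sing_val: 100          # degenerate-case threshold [10-1000]
limit_normalized_residual: 0.30    # normalized residual threshold [0.05-0.50]
matchings_active_search: 3         # multiview superpixel matches required [0-4]
kinect_initialization: 1           # 1 -> use RGB-D data for initialization
minim_points_converged: 66
```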

# Contact

If you have any issues compiling or running rgbdtam, or you would like to know anything about the code, please contact the authors:

 Alejo Concha -> aconchabelenguer@gmail.com

 Javier Civera -> jcivera@unizar.es