
Depth Clustering


This is a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e., the 16-, 32-, and 64-beam ones.

Check out a video that shows all objects whose bounding boxes are smaller than 10 square meters: Segmentation illustration

How to build?

Prerequisites

  • Catkin.
  • OpenCV: sudo apt-get install libopencv-dev
  • QGLViewer: sudo apt-get install libqglviewer-dev
  • Qt (4 or 5, depending on your system):
    • Ubuntu 14.04: sudo apt-get install libqt4-dev
    • Ubuntu 16.04: sudo apt-get install qtbase5-dev
  • (optional) PCL - needed for saving clouds to disk
  • (optional) ROS - needed for subscribing to topics
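On Ubuntu 16.04, the non-optional dependencies above can be installed in one go. The package names are taken from the list above (qtbase5-dev for Qt 5 is an assumption; adjust for your release, e.g. libqt4-dev on 14.04):

```shell
# One-shot install of the non-optional dependencies (Ubuntu 16.04).
# Package names follow the list above -- adjust for other releases.
sudo apt-get update
sudo apt-get install -y libopencv-dev libqglviewer-dev qtbase5-dev
```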

Build script

This is a catkin package, so we assume the code lives in a catkin workspace and that CMake knows about Catkin. You can then build it from the project folder:

  • mkdir build
  • cd build
  • cmake ..
  • make -j4
  • (optional) ctest -VV

It can also be built with catkin_tools if the code is inside a catkin workspace:

  • catkin build depth_clustering

P.S. If you don't use catkin build yet, you should. Install it with sudo pip install catkin_tools.

How to run?

See the examples. There are ROS nodes as well as standalone binaries. The examples include showing axis-aligned bounding boxes around detected objects (these binaries start with the show_objects_ prefix) as well as a node that saves all segments to disk. The examples should be easy to tweak for your needs.

Run on real world data

Go to the folder with the binaries:

cd <path_to_project>/build/devel/lib/depth_clustering

Frank Moosmann's Velodyne SLAM dataset

Get the data:

mkdir data/; wget http://www.mrt.kit.edu/z/publ/download/velodyneslam/data/scenario1.zip -O data/moosman.zip; unzip data/moosman.zip -d data/; rm data/moosman.zip

Run a binary to show detected objects:

./show_objects_moosman --path data/scenario1/

Other data

There are also examples of running the processing on KITTI data and on ROS input. See the --help output of each example for details.

Documentation

You should be able to get Doxygen documentation by running:

cd doc/
doxygen Doxyfile.conf

Related publications

Please cite the related paper if you use this code:

@InProceedings{bogoslavskyi16iros,
Title     = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
Author    = {I. Bogoslavskyi and C. Stachniss},
Booktitle = {Proc. of The International Conference on Intelligent Robots and Systems (IROS)},
Year      = {2016},
Url       = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf}
}