DipG-Seg

The official implementation of DipG-Seg.

DipG-Seg is a fast and accurate ground segmentation algorithm based on double images, namely the z-image and the d-image. The method is pixel-wise, which can be regarded as the image-space counterpart of point-wise processing on the 3D point cloud. Despite this fine granularity, it is both highly efficient and accurate.
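For intuition, the sketch below shows one way such a projection can be implemented: each pixel of the d-image stores the range of a point, while the same pixel of the z-image stores its height. This is an illustrative sketch, not the code of this repo; the function name and the vertical field-of-view constants are assumptions (HDL-64E-like placeholder values).

#include <cmath>

#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Illustrative spherical projection of a point cloud into a d-image (range)
// and a z-image (height). The vertical FOV bounds are placeholder values for
// a 64-beam sensor, not the exact parameters shipped with this repo.
void projectToDoubleImages(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                           int rows, int cols,
                           cv::Mat& d_image, cv::Mat& z_image) {
  d_image = cv::Mat::zeros(rows, cols, CV_32F);
  z_image = cv::Mat::zeros(rows, cols, CV_32F);
  const float fov_up = 2.0f * M_PI / 180.0f;      // assumed upper bound [rad]
  const float fov_down = -24.8f * M_PI / 180.0f;  // assumed lower bound [rad]
  const float fov = fov_up - fov_down;
  for (const auto& p : cloud) {
    const float d = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (d < 1e-3f) continue;                 // skip invalid/zero returns
    const float yaw = std::atan2(p.y, p.x);  // horizontal angle -> column
    const float pitch = std::asin(p.z / d);  // vertical angle -> row
    const int col = static_cast<int>(0.5f * (1.0f - yaw / M_PI) * cols);
    const int row = static_cast<int>((fov_up - pitch) / fov * rows);
    if (row < 0 || row >= rows || col < 0 || col >= cols) continue;
    d_image.at<float>(row, col) = d;    // d-image pixel: range
    z_image.at<float>(row, col) = p.z;  // z-image pixel: height
  }
}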

1. Features of DipG-Seg

  • A complete ground segmentation framework based entirely on images. Thus, it is easy to accelerate further by adjusting the image resolutions.
  • Accurate and super fast. DipG-Seg runs at more than 120 Hz on an Intel NUC (i7-1165G7) at a resolution of 64×870, while achieving an accuracy of over 94% on the SemanticKITTI dataset.
  • Robust to LiDAR models and scenarios. The provided parameters allow DipG-Seg to work well on 64-, 32-, and 16-beam LiDARs and in the scenarios of nuScenes and SemanticKITTI.


2. About this repo

2.1 We hope this repo can help you

If you find our repo helpful for your research, please cite our paper. Thank you!

Authors: Hao Wen and Chunhua Liu from the EEPT Lab at CityU.

Paper: DipG-Seg: Fast and Accurate Double Image-Based Pixel-Wise Ground Segmentation, Hao Wen, Senyi Liu, Yuxin Liu, and Chunhua Liu, T-ITS, Regular Paper

@ARTICLE{10359455,
  author={Wen, Hao and Liu, Senyi and Liu, Yuxin and Liu, Chunhua},
  journal={IEEE Transactions on Intelligent Transportation Systems}, 
  title={DipG-Seg: Fast and Accurate Double Image-Based Pixel-Wise Ground Segmentation}, 
  year={2023},
  volume={},
  number={},
  pages={1-12},
  keywords={Image segmentation;Fitting;Three-dimensional displays;Point cloud compression;Sensors;Laser radar;Image sensors;Ground segmentation;autonomous driving;mobile robots;LiDAR-based perception},
  doi={10.1109/TITS.2023.3339334}}

Explore more demos in the video.

2.2 What is in this repo

  • An example ROS node for validation on your own platform.
  • A visualization demo and an evaluation program based on the SemanticKITTI dataset.

3. How to Run

3.1 Environment

Ubuntu 18.04 + ROS Melodic

ROS can be installed by following this tutorial.

3.2 Prerequisite Packages

C++ packages

  1. OpenCV
  2. PCL

OpenCV can be installed from this link.
PCL can be installed by running this in the terminal:

sudo apt install libpcl-dev

Python packages

# Install the Python libraries (numpy, pandas) needed by the evaluation script.
pip2 install numpy pandas

3.3 Build

mkdir -p ~/catkin_ws/src/
cd ~/catkin_ws/src/
git clone https://github.com/EEPT-LAB/DipG-Seg.git
cd .. && catkin_make
# Remember to source devel/setup.bash before you run the dipgseg nodes.

Alternatively, you can build with catkin tools:

mkdir -p ~/catkin_ws/src/
cd ~/catkin_ws
catkin init
cd ~/catkin_ws/src/
git clone https://github.com/EEPT-LAB/DipG-Seg.git
cd .. && catkin build dipgseg
# Remember to source devel/setup.bash before you run the dipgseg nodes.

3.4 Dataset

  1. If you want to validate DipG-Seg on SemanticKITTI, please download it and place it under ~/your_dataset_path/.
  2. If you want to validate DipG-Seg on nuScenes, please download it and place it under ~/your_dataset_path/. NOTE THAT we do not provide the complete evaluation program for nuScenes, BUT we do provide the projection parameters for the nuScenes LiDAR. Remember to modify the projection_parameter when you validate on nuScenes.
  3. You can also validate on your own mobile platform. Please ensure that your LiDAR publishes sensor_msgs::PointCloud2 messages on the topic /pointcloud; otherwise, remap the topic /pointcloud to your_point_cloud_topic_name in the launch file. Last but not least, you should modify the projection_parameter for your LiDAR. Some tool scripts are provided in the scripts folder, and a minimal probe node is sketched below.
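If the topic wiring is unclear, a tiny probe node like the following can confirm that clouds actually arrive on /pointcloud before you launch the segmentation. It is a hypothetical helper written for this README, not a node shipped in this repo.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Print the size of every incoming cloud so the topic wiring can be checked.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);
  ROS_INFO("Received %zu points in frame %s",
           cloud.size(), msg->header.frame_id.c_str());
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "pointcloud_probe");
  ros::NodeHandle nh;
  // Subscribe to the topic DipG-Seg expects; remap it in your launch file
  // if your driver publishes under a different name.
  ros::Subscriber sub = nh.subscribe("/pointcloud", 1, cloudCallback);
  ros::spin();
  return 0;
}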

3.5 Let's run

3.5.1 Run on the SemanticKITTI dataset.

  • Modify the parameters dataset_path and seq_num in the launch file as:
<rosparam param="dataset_path">"/path_to_your_downloaded_dataset/sequences/"</rosparam>
<node pkg="dipgseg" type="offline_kitti_node" name="dipgseg_node" output="screen" args="seq_num">
  • Then, run the following command in the terminal:
roslaunch dipgseg visualization_offline_kitti.launch 

3.5.2 Evaluation on the SemanticKITTI dataset.

  • Modify the parameter dataset_path in the launch file as:
<rosparam param="dataset_path">"/path_to_your_downloaded_dataset/sequences/"</rosparam>
  • Then, run this Python script in the terminal:
cd ~/catkin_ws/src/DipG-Seg/scripts/
python2 eval_on_kitti.py

When the evaluation is finished, you can find the evaluation results in the result folder.
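For reference, per-point ground segmentation results are usually summarized with precision, recall, F1, and accuracy derived from a binary confusion matrix. The snippet below is a generic sketch of those formulas, not the exact output format of eval_on_kitti.py.

#include <cstdio>

// Per-point counts for the binary ground / non-ground decision.
struct Confusion {
  long tp = 0;  // ground points labeled ground
  long fp = 0;  // non-ground points labeled ground
  long fn = 0;  // ground points labeled non-ground
  long tn = 0;  // non-ground points labeled non-ground
};

// Standard metrics; assumes non-degenerate counts (no zero denominators).
void printMetrics(const Confusion& c) {
  const double precision = static_cast<double>(c.tp) / (c.tp + c.fp);
  const double recall = static_cast<double>(c.tp) / (c.tp + c.fn);
  const double f1 = 2.0 * precision * recall / (precision + recall);
  const double accuracy =
      static_cast<double>(c.tp + c.tn) / (c.tp + c.fp + c.fn + c.tn);
  std::printf("precision=%.4f recall=%.4f F1=%.4f accuracy=%.4f\n",
              precision, recall, f1, accuracy);
}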

3.5.3 Run on your own mobile robot or a recorded ROS bag file.

  • Modify the projection_param file according to the parameters of your LiDAR. A simple tool script for generating the projection parameters of LiDARs with an even vertical angle resolution will be released; for now, only the parameters for the LiDARs of SemanticKITTI and nuScenes are provided.

NOTE: If you want to generate projection parameters for LiDARs other than those used in the above two datasets, the procedure is explained in detail in our paper; a rough sketch of the even-resolution case is given at the end of this subsection.

  • Remap the topic /pointcloud to your_point_cloud_topic_name in the launch file if necessary.
<remap from="/pointcloud" to="your_point_cloud_topic_name" />
  • Start your LiDAR sensor node first, then run the following command in the terminal:
roslaunch dipgseg demo.launch
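Until projection_param_generator.py is released, the even-resolution case is simple enough to sketch: given the vertical field-of-view bounds and the beam count, the per-row vertical angles follow from a uniform step. A minimal sketch, assuming HDL-64E-like placeholder values; substitute your sensor's datasheet numbers.

#include <cstdio>
#include <vector>

int main() {
  const int beams = 64;           // image rows = number of beams (assumed)
  const double fov_up = 2.0;      // upper vertical FOV bound [deg] (assumed)
  const double fov_down = -24.8;  // lower vertical FOV bound [deg] (assumed)
  const double step = (fov_up - fov_down) / (beams - 1);
  std::vector<double> row_angles(beams);
  for (int r = 0; r < beams; ++r) {
    // Row 0 is the topmost beam, matching a top-to-bottom image layout.
    row_angles[r] = fov_up - r * step;
    std::printf("row %2d: %7.3f deg\n", r, row_angles[r]);
  }
  return 0;
}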

3.6 Tool scripts

Finished

- [x] eval_on_kitti.py: Evaluation on the SemanticKITTI dataset.

- [x] kitti_ros_publisher.py: Publish the SemanticKITTI dataset as a ROS bag file would.

To be finished

- [ ] projection_param_generator.py: Generate the projection parameters of LiDARs that have an even vertical angle resolution.

4. Contact

Maintainer: Hao Wen

Email: hao.wen@my.cityu.edu.hk

Acknowledgement

To achieve a nice code style and good performance, the following works gave us a lot of help and inspiration. Thanks to the authors for their great work!

  1. depth_clustering: File loader and image-based segmentation.

  2. Patchwork, Patchwork++, and Ground-Segmentation-Benchmark: Ground segmentation evaluation.