TixiaoShan/LIO-SAM

Increase Point Cloud Density in SLAM Mapping with Unitree GO2 EDU Bot

Bharannn opened this issue · 5 comments

Hi there,

I hope this message finds you well. I've been using the Unitree GO2 EDU bot equipped with the Hesai XT-16 lidar for SLAM mapping. While performing SLAM and saving the map, I've observed that the point cloud density is lower than expected, even when running the lidar at its highest spin rate (1200 RPM) as per the documentation here.

I'm reaching out to inquire if there's a way to adjust the point cloud density directly through the SLAM process. I've reviewed the documentation provided but haven't found any specific instructions on how to address this issue.

Would it be possible for you to provide guidance on any configuration or parameter files that might allow me to adjust the point cloud density during SLAM mapping? Alternatively, if there are any other recommended solutions or approaches to increase the density of the point cloud output, I would greatly appreciate your insights.

Thank you for your attention to this matter, and I look forward to your assistance in resolving this issue.

LIO-SAM only saves the plane points and edge points of each keyframe. If you want to save the full point cloud, you need to implement it yourself:

1. Read the implementation of ImageProjection::projectPointCloud() and save the undistorted points together with their timestamps.
2. Stitch the point clouds from step 1 together according to the trajectory (a rough sketch is below). LIO-SAM does not save every frame's pose, so you may need to interpolate.
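Not part of LIO-SAM, but here is a minimal stitching sketch under a few assumptions: each deskewed scan has been saved as a timestamped PCD, and the trajectory is a plain-text file of `t x y z qx qy qz qw` lines (adapt the parsing to whatever you actually export). Each scan's pose is interpolated from the trajectory, the scan is transformed into the map frame, and everything is concatenated into one cloud.

```cpp
// Sketch only: stitch per-scan deskewed PCDs into one dense map using a saved trajectory.
// File names, the trajectory format, and the scan list are assumptions for illustration.
#include <pcl/common/transforms.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Geometry>
#include <algorithm>
#include <fstream>
#include <iterator>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct StampedPose { Eigen::Vector3f t; Eigen::Quaternionf q; };

// Load "t x y z qx qy qz qw" lines (format is an assumption; adapt to your export).
std::map<double, StampedPose> loadTrajectory(const std::string& file)
{
    std::map<double, StampedPose> traj;
    std::ifstream in(file);
    double t, x, y, z, qx, qy, qz, qw;
    while (in >> t >> x >> y >> z >> qx >> qy >> qz >> qw)
        traj[t] = {Eigen::Vector3f(x, y, z), Eigen::Quaternionf(qw, qx, qy, qz)};
    return traj;
}

// Interpolate the pose at 'time' between the two surrounding trajectory samples.
Eigen::Affine3f poseAt(const std::map<double, StampedPose>& traj, double time)
{
    if (traj.size() < 2) return Eigen::Affine3f::Identity();   // not enough samples

    auto upper = traj.lower_bound(time);
    if (upper == traj.begin()) upper = std::next(traj.begin());
    if (upper == traj.end())   upper = std::prev(traj.end());
    auto lower = std::prev(upper);

    double ratio = (time - lower->first) / (upper->first - lower->first);
    float  s     = static_cast<float>(std::clamp(ratio, 0.0, 1.0));

    Eigen::Vector3f    t = (1.0f - s) * lower->second.t + s * upper->second.t;
    Eigen::Quaternionf q = lower->second.q.slerp(s, upper->second.q);
    return Eigen::Translation3f(t) * q;
}

int main()
{
    auto trajectory = loadTrajectory("trajectory.txt");        // hypothetical file name

    // (timestamp, path) pairs of the deskewed scans saved from projectPointCloud();
    // building this list from the directory contents is omitted here.
    std::vector<std::pair<double, std::string>> scans;

    pcl::PointCloud<pcl::PointXYZI> mapCloud;
    for (const auto& [stamp, path] : scans)
    {
        pcl::PointCloud<pcl::PointXYZI> scan, scanInMap;
        if (pcl::io::loadPCDFile(path, scan) != 0) continue;   // skip unreadable files
        pcl::transformPointCloud(scan, scanInMap, poseAt(trajectory, stamp));
        mapCloud += scanInMap;                                  // accumulate the dense map
    }
    if (!mapCloud.empty())
        pcl::io::savePCDFileBinary("dense_map.pcd", mapCloud);
    return 0;
}
```

For long runs you may want to voxel-filter the accumulated cloud periodically so memory stays under control.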

It is best to offload the file-writing to a dedicated thread so the disk I/O does not affect real-time performance.
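As a sketch of that idea (assumed code, not from the repository): a small queue plus a background std::thread. The SLAM-side callback only enqueues a cloud; the worker does the slow savePCDFileBinary call off the critical path.

```cpp
// Sketch of an asynchronous cloud writer (illustration only, not LIO-SAM code).
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>

class AsyncCloudWriter
{
public:
    AsyncCloudWriter() : running_(true), worker_([this] { run(); }) {}

    ~AsyncCloudWriter()
    {
        { std::lock_guard<std::mutex> lk(mtx_); running_ = false; }
        cv_.notify_one();
        worker_.join();                               // drain the queue, then stop
    }

    // Called from the real-time side: enqueue only, never touch the disk here.
    void push(std::string path, pcl::PointCloud<pcl::PointXYZI>::Ptr cloud)
    {
        { std::lock_guard<std::mutex> lk(mtx_); queue_.emplace(std::move(path), std::move(cloud)); }
        cv_.notify_one();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lk(mtx_);
        while (running_ || !queue_.empty())
        {
            cv_.wait(lk, [this] { return !queue_.empty() || !running_; });
            while (!queue_.empty())
            {
                auto job = std::move(queue_.front());
                queue_.pop();
                lk.unlock();                          // write without holding the lock
                pcl::io::savePCDFileBinary(job.first, *job.second);
                lk.lock();
            }
        }
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<std::pair<std::string, pcl::PointCloud<pcl::PointXYZI>::Ptr>> queue_;
    bool running_;
    std::thread worker_;
};
```

Usage would be to keep one AsyncCloudWriter alive next to the mapping node and call push() wherever the cloud is currently written synchronously.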

I've been checking the same, but it seems there are no filters in imageProjection.cpp that downsample the data, other than the downsample rate if it is set.
I think it should be mapOptmization.cpp, where all the registration of the cloud happens; the points there are filtered by the voxel-grid leaf size. I reduced the leaf sizes to 0.001 and got a warning that the voxel indices would overflow:

[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.
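For context, this is the generic PCL pattern behind that leaf-size filtering (a sketch, not the exact LIO-SAM code): a voxel grid keeps one centroid point per leaf-sized cube, so a smaller leaf keeps more points but needs far more voxel indices.

```cpp
// Generic voxel-grid downsampling sketch: one output point (the centroid) per
// leafSize x leafSize x leafSize cube. Not the exact LIO-SAM code.
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZI>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZI>::Ptr& input, float leafSize)
{
    pcl::VoxelGrid<pcl::PointXYZI> filter;
    filter.setInputCloud(input);
    filter.setLeafSize(leafSize, leafSize, leafSize);   // e.g. 0.2 m; 0.001 m triggers the warning above

    pcl::PointCloud<pcl::PointXYZI>::Ptr output(new pcl::PointCloud<pcl::PointXYZI>);
    filter.filter(*output);
    return output;
}
```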

The downsampling in mapOptmization is only used to build the local map for scan matching, so it has no effect on the density of the map you are after. The warning appears because the voxel index would exceed the maximum value of the int32 type. Setting leaf_size that small is not recommended anyway; it can keep the LOAM-style matching from finding good enough lines and planes for the point-to-line and point-to-plane matching.
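To see concretely when that int32 limit is hit, here is a rough stand-alone check (a sketch that mirrors the condition behind the warning, not PCL's internal code): the number of voxels spanned by the cloud's bounding box has to stay below 2^31.

```cpp
// Rough feasibility check for a voxel leaf size: the voxel count over the cloud's
// bounding box must fit in a 32-bit index. Sketch only; mirrors the warning's condition.
#include <pcl/common/common.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cstdint>
#include <iostream>
#include <limits>

bool leafSizeFits(const pcl::PointCloud<pcl::PointXYZI>& cloud, float leaf)
{
    pcl::PointXYZI minPt, maxPt;
    pcl::getMinMax3D(cloud, minPt, maxPt);

    const std::int64_t dx = static_cast<std::int64_t>((maxPt.x - minPt.x) / leaf) + 1;
    const std::int64_t dy = static_cast<std::int64_t>((maxPt.y - minPt.y) / leaf) + 1;
    const std::int64_t dz = static_cast<std::int64_t>((maxPt.z - minPt.z) / leaf) + 1;

    return dx * dy * dz < static_cast<std::int64_t>(std::numeric_limits<std::int32_t>::max());
}

int main()
{
    // Two points 100 m apart stand in for the bounding box of a real map.
    pcl::PointCloud<pcl::PointXYZI> cloud;
    pcl::PointXYZI a, b;
    a.x = a.y = a.z = 0.0f;
    b.x = 100.0f; b.y = 100.0f; b.z = 10.0f;
    cloud.push_back(a);
    cloud.push_back(b);

    std::cout << "leaf 0.2:   " << leafSizeFits(cloud, 0.2f)   << "\n";  // fits
    std::cout << "leaf 0.001: " << leafSizeFits(cloud, 0.001f) << "\n";  // overflows -> warning
    return 0;
}
```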
The map you get from the stock code is really just the feature points of the keyframes, so there are two likely reasons the map points are sparse:

  • Keyframes are at most about half of the lidar frames, and they become even sparser at slow speeds
  • During feature extraction, points that are neither edge nor plane points are discarded

If you want a dense point cloud, it is not recommended to take it from the stock code; you will need to stitch it yourself based on the trajectory.

Yeah, understood. Thanks for the explanation.
I am thinking of recording a PCAP file in parallel over the same time frame as the mapping. I am getting a trajectory.txt or .pcd file that I can then use to stitch all the raw frames together along the trajectory.
How can I record a PCAP file for this in the code itself? Or, if I record it through tcpdump, will that work?
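tcpdump writes standard .pcap files, so capturing the lidar's UDP stream with it should work the same as a dedicated recorder. If you want the capture inside your own code, below is a minimal libpcap sketch; the interface name and UDP port 2368 are assumptions here, so check the destination port actually configured on the XT-16.

```cpp
// Minimal libpcap capture sketch (illustration; interface and port are assumptions).
// Writes every UDP packet on the lidar data port to a .pcap file until interrupted.
#include <pcap/pcap.h>
#include <cstdio>

int main(int argc, char** argv)
{
    const char* dev = (argc > 1) ? argv[1] : "eth0";        // capture interface (assumption)
    const char* out = (argc > 2) ? argv[2] : "lidar.pcap";  // output capture file

    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* handle = pcap_open_live(dev, 65535, 1, 1000, errbuf);
    if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Keep only the lidar's UDP stream; 2368 is a common default port, verify yours.
    bpf_program filter{};
    if (pcap_compile(handle, &filter, "udp port 2368", 1, PCAP_NETMASK_UNKNOWN) != 0 ||
        pcap_setfilter(handle, &filter) != 0)
    {
        std::fprintf(stderr, "filter setup failed: %s\n", pcap_geterr(handle));
        return 1;
    }

    pcap_dumper_t* dumper = pcap_dump_open(handle, out);
    if (!dumper) { std::fprintf(stderr, "pcap_dump_open failed\n"); return 1; }

    // pcap_dump is itself a valid packet handler: every packet goes straight to the file.
    pcap_loop(handle, -1, pcap_dump, reinterpret_cast<u_char*>(dumper));

    pcap_dump_close(dumper);
    pcap_close(handle);
    return 0;
}
```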