ntu-aris/ntu_viral_dataset

The quality of the LiDAR extrinsics

Closed this issue · 4 comments

The LiDAR extrinsics do not precisely align the scans from the two lidars

First of all, I am very grateful for your generous dataset and contributions to the community.

I tested the extrinsics of the two lidars through the following steps (correct me if any of them is wrong):

    1. Obtain a scan from each lidar; let's call them Horizontal_scan and Vertical_scan.
    2. Ensure that the two scans share the same time domain.
    3. Transform both scans into the body coordinate system, i.e., Horizontal_scan = T_body2horizontalLidar * Horizontal_scan and Vertical_scan = T_body2verticalLidar * Vertical_scan.
    4. Visualize Horizontal_scan and Vertical_scan together (a rough sketch of this procedure is given below).
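For concreteness, here is a minimal sketch of steps 1-4 in Python with NumPy and Open3D. The file names and the identity placeholders for T_body2horizontalLidar / T_body2verticalLidar are assumptions for illustration only; they should be replaced with the actual calibration values from the dataset.

```python
import numpy as np
import open3d as o3d

# Placeholder extrinsics (body <- lidar), 4x4 homogeneous matrices.
# Substitute the actual values from the dataset's calibration files.
T_body2horizontalLidar = np.eye(4)
T_body2verticalLidar = np.eye(4)

def load_scan(path):
    """Load one scan (e.g. a PCD file exported at a common timestamp)."""
    return o3d.io.read_point_cloud(path)

# Steps 1-2: scans from the two lidars at (approximately) the same time.
horizontal_scan = load_scan("horizontal_scan.pcd")  # hypothetical file name
vertical_scan = load_scan("vertical_scan.pcd")      # hypothetical file name

# Step 3: express both scans in the body frame (transform is in-place).
horizontal_scan.transform(T_body2horizontalLidar)
vertical_scan.transform(T_body2verticalLidar)

# Step 4: visualize; with good extrinsics the ground planes should coincide.
horizontal_scan.paint_uniform_color([1.0, 0.0, 0.0])  # red
vertical_scan.paint_uniform_color([0.0, 0.0, 1.0])    # blue
o3d.visualization.draw_geometries([horizontal_scan, vertical_scan])
```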

In the visualization, I found that the provided extrinsics roughly align the two scans, but not precisely (the mismatch is especially apparent for points on the ground), which might be detrimental to odometry and mapping work.

I have attached two sample data files for your reference (kindly download the zip file and examine them); one is from the eee03 sequence and the other from nya01. I still suspect the mismatch may be caused by a mistake on my side. Just in case the extrinsics are indeed inaccurate, though, I would strongly recommend a re-calibration.

Finally, many thanks again for your efforts!

eee03_000014_ground_mismatch

nya01_000300_ground_mismatch

sample_data.zip

Hi
First of all, your method seems correct to me.

There are some possibilities that I can think of.
The first is sensor intrinsic error.
According to the report by Maria Jokela, Matti Kutila and Pasi Pyykönen, "Testing and Validation of Automotive Point-Cloud Sensors in Adverse Weather Conditions", the sensor readings do have a non-Gaussian sensing variance that changes with sensing distance, as shown in the figure below. They were testing version 1 of the lidar firmware. We were using version 2 of the firmware to keep the data compact (version 1 has a larger point-cloud field bug), but the V2 firmware seems to have a larger variance. For this possibility there is not much we can do; the only way is to feed the issue back to the lidar manufacturer.
image

The second issue is calibration error, which is built on top of the first issue.
We tried multiple ways to calibrate this sensor, such as the 4-hole method, VICON, a chessboard, and a trihedron pattern, but the sensor readings at close range are noisy and can introduce a global error, as shown in the figures below. This noise is coupled with the pattern's reflection readings, the lidar's sensing angle, etc. For this possibility, we are working with the Ouster side and trying to find a better way to calibrate.
image
image

The last possibility is structural weakness. The drone needs to carry the payload of all the sensors, so the structure has to be light, which is why it uses carbon fiber. There could be slight bending between the front and back sensors about the center because of the long, thin carbon-fiber tube. This is likely to happen when a sudden throttle increase/cut is applied. We can try to propose a live calibration method to address this issue, but it will take some time.

image

Hi, Shenghai,

Thanks for your reply. May I give two humble suggestions?

  • I noticed that your chessboard paper sheet bulges a little in some places. It might be better to use a chessboard with a flat, rigid surface.
  • Alternatively, as you mentioned, the lidar readings are noisy at close range. In light of this, a large calibration room should be helpful. I also recommend a plane-to-plane registration-based calibration method, in which planar surfaces at relatively far range can be used as constraints (a rough sketch is given below). This is based on the assumption that far control points constrain the system better than their close counterparts.
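As a rough illustration of the second suggestion (not the dataset's actual calibration procedure), point-to-plane ICP can serve as a stand-in for the plane-to-plane constraint: once both scans are expressed in the body frame, it can estimate a small correction to the vertical-lidar extrinsic. Open3D is assumed, and the correspondence distance and normal-estimation radius are guesses.

```python
import numpy as np
import open3d as o3d

def refine_extrinsic(vertical_in_body, horizontal_in_body, max_corr_dist=0.3):
    """Estimate a correction T_corr so that T_corr * T_body2verticalLidar
    better aligns the vertical scan to the horizontal one.
    Both clouds are assumed to already be expressed in the body frame."""
    # Point-to-plane ICP needs normals on the target cloud.
    horizontal_in_body.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        vertical_in_body, horizontal_in_body, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 correction in the body frame

# Usage (point clouds as in the verification sketch above):
# T_corr = refine_extrinsic(vertical_scan, horizontal_scan)
# T_body2verticalLidar_refined = T_corr @ T_body2verticalLidar
```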

Regards,
Pengxin

Hi pengxin,

Thanks for using the dataset and helping us troubleshoot it.

We think there can indeed be a bit of error in the extrinsics due to bending of the rig.

When I added a few centimeters to the z of the IMU-to-vertical-lidar extrinsic, the two point clouds seem to align better:

extrinsicTf:   [-1.0,  0.0,  0.0, -0.55,
                 0.0,  0.0,  1.0,  0.03,
                 0.0,  1.0,  0.0,  0.09,
                 0.0,  0.0,  0.0,  1.0 ]
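For anyone who wants to try this value, here is a small sketch (NumPy assumed, and assuming the 16 numbers are row-major) of turning the extrinsicTf list into a 4x4 matrix and using it to express vertical-lidar points in the body frame:

```python
import numpy as np

# Row-major 16-element list as given above (body <- vertical lidar).
extrinsicTf = [-1.0, 0.0, 0.0, -0.55,
                0.0, 0.0, 1.0,  0.03,
                0.0, 1.0, 0.0,  0.09,
                0.0, 0.0, 0.0,  1.0]

T_body2verticalLidar = np.array(extrinsicTf).reshape(4, 4)

def lidar_to_body(points_lidar):
    """points_lidar: (N, 3) array in the vertical-lidar frame -> (N, 3) in the body frame."""
    homogeneous = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    return (T_body2verticalLidar @ homogeneous.T).T[:, :3]
```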

Screenshot from 2021-05-06 18-00-25

Screenshot from 2021-05-06 18-00-51

Maybe some changes in the roll angle could align them even better. We will update the dataset with better calibration in the future. Nevertheless, I think this error is not very significant: we've done some experiments with M-LOAM (you can find the settings at https://github.com/brytsknguyen/M-LOAM) and with our VIRAL SLAM and MILIOM methods using the default parameters, and they still work out quite well.

Hi, Nguyen,

Thanks for your reply. I tested a SLAM framework using both the extrinsics provided with the dataset and the new vertical-lidar extrinsic you just suggested (with the few-centimeter downward shift). The results of both are good and similar under the viral_eval metrics. Hence, I agree with you that such a small calibration error is of very minor significance.

Regards,
Pengxin