Addressing Significant Cumulative Errors with UrbanLoco Dataset
YWL0720 opened this issue · 6 comments
Hi! I have tested your algorithm using Dataset 5: HK-Data20190117 from the UrbanLoco dataset. However, I noticed significant cumulative errors when the vehicle returns to the starting point. Below are the parameters I used. Could you please advise on how to adjust them for better performance? Thank you.
```yaml
dlio:
  frames:
    odom: odom
    baselink: base_link
    lidar: lidar
    imu: imu
  map:
    waitUntilMove: true
    dense:
      filtered: false
    sparse:
      frequency: 1.0
      leafSize: 0.25
  odom:
    gravity: 9.80665
    imu:
      approximateGravity: false
      calibration:
        gyro: true
        accel: true
        time: 3
      bufferSize: 5000
    preprocessing:
      cropBoxFilter:
        size: 1.0
      voxelFilter:
        res: 0.25
    keyframe:
      threshD: 1.0
      threshR: 45.0
    submap:
      useJaccard: true
      keyframe:
        knn: 10
        kcv: 10
        kcc: 10
    gicp:
      minNumPoints: 64
      kCorrespondences: 16
      maxCorrespondenceDistance: 0.5
      maxIterations: 32
      transformationEpsilon: 0.01
      rotationEpsilon: 0.01
      initLambdaFactor: 1e-9
    geo:
      Kp: 4.5
      Kv: 11.25
      Kq: 4.0
      Kab: 2.25
      Kgb: 1.0
      abias_max: 5.0
      gbias_max: 0.5
```
Hi @YWL0720 -- yes, these types of datasets with large loops are difficult for odometry-only algorithms. However, there are a few things you can try:
- Try expanding the number of keyframes used for the submap (e.g., `knn`, `kcv`, `kcc`) to 20 or 30
- Set `adaptive: false` in `dlio.yaml` and set `keyframe: threshD` to `10`

Let me know if those work.
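Applied to the config posted above, those changes would look roughly like this (a sketch showing only the affected keys; the exact location of `adaptive` may differ in your version of `dlio.yaml`):

```yaml
dlio:
  adaptive: false        # disable adaptive keyframing
  odom:
    keyframe:
      threshD: 10.0      # was 1.0
    submap:
      keyframe:
        knn: 20          # was 10; try 20 or 30
        kcv: 20
        kcc: 20
```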
Thank you for your response. I adjusted the parameters based on your advice and retested the algorithm. Unfortunately, the cumulative error in the xy direction did not decrease significantly. That said, it is worth noting that DLIO exhibits significantly lower cumulative z-axis error than similar algorithms such as FAST-LIO on the same dataset.
Hi! I have another question to ask. In the function buildKeyframesAndSubmap, there is an attempt to transform the point clouds of keyframes into the world coordinate system in order to construct the submap. This process utilizes the homogeneous transformation matrix this->keyframe_transformations[i], which is obtained through point cloud ICP. However, this->state obtained after IMU fusion should theoretically be more accurate than the result of point cloud ICP, and it represents the current pose. I noticed that the code does not use this->state for transforming and constructing the submap. I'm curious to know the reasoning behind this decision. Thank you.
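For reference, the pattern being asked about can be sketched as follows (a simplified, self-contained illustration, not DLIO's actual code; the variable names mirror the members mentioned above, and all values are hypothetical):

```python
# Simplified illustration (not DLIO's actual code) of building a submap by
# transforming each keyframe's scan with the GICP pose stored when that
# keyframe was created, rather than with the IMU-propagated state.

def transform(T, p):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a 3D point."""
    x, y, z = p
    return tuple(T[r][0] * x + T[r][1] * y + T[r][2] * z + T[r][3]
                 for r in range(3))

# Hypothetical data: two keyframe clouds (in the sensor frame) and the
# GICP-estimated pose at which each keyframe was captured.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
shifted  = [[1, 0, 0, 10], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
keyframe_clouds = [[(1.0, 0.0, 0.0)], [(0.0, 1.0, 0.0)]]
keyframe_transformations = [identity, shifted]

# Each cloud is anchored by its own scan-matching result, so drift from IMU
# integration never leaks into the map.
submap = [transform(T, p)
          for cloud, T in zip(keyframe_clouds, keyframe_transformations)
          for p in cloud]
print(submap)  # [(1.0, 0.0, 0.0), (10.0, 1.0, 0.0)]
```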
I can write a longer response later, but the short of it is that it's much more stable to use keyframes transformed via GICP than the state after IMU propagation. IMU propagation can be highly unreliable (inaccurate biases, sudden impacts, etc.) and can cause those "fly away" behaviors where error continually compounds. In contrast, decoupling the two and using GICP to transform keyframes means that (especially with our geometric observer), when IMU propagation is poor, the state will always propagate / converge toward the GICP result, so error from IMU integration does not compound. There is still some theoretical analysis to be done to actually prove this point (WIP), but that's the tl;dr of it. Good question.
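The non-compounding behavior can be shown with a toy 1-D example (assumed dynamics and a hypothetical gain, not the actual geometric-observer equations): a dead-reckoned IMU estimate with an un-modeled bias drifts without bound, while a proportional pull toward the scan-matching pose keeps the fused error bounded.

```python
# Toy 1-D illustration (hypothetical dynamics and gain, not DLIO's observer):
# pure IMU propagation compounds error, while a proportional correction
# toward the GICP pose keeps the fused estimate's error bounded.
true_pos = imu_only = fused = 0.0
bias = 0.05   # un-modeled accelerometer bias per step (hypothetical)
Kp = 0.5      # hypothetical observer gain

for _ in range(100):
    true_pos += 1.0
    imu_only += 1.0 + bias    # pure propagation: error grows every step
    fused    += 1.0 + bias    # same faulty propagation...
    gicp = true_pos           # ...but corrected toward the scan-match result
    fused += Kp * (gicp - fused)

print(round(imu_only - true_pos, 2))  # 5.0  -- drift compounds over 100 steps
print(round(fused - true_pos, 2))     # 0.05 -- bounded, does not grow
```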
Thank you for your response. DLIO is an excellent work, and I have gained a lot from it.
Happy to hear that! Thanks for your interest in our work :^)