NeBula-Autonomy/LAMP

Questions about the system output and the input bag files.


Hello,

Thank you for this library. I ran it in a Docker image of Ubuntu 18.04 with ROS Melodic and, following the launch file, created the log directory under lamp_pgo/log. This produced the output, where result.g2o appears to be the optimized pose graph to compare against ground truth.

When I evaluate the output against ground truth with the evo library, I get strange results. For example, on Urban, the trajectory looks fairly reasonable if I apply no transformation to align it with ground truth:
[figure: Urban_traj_no_transform]
However, there are significant offsets at times in the rpy and xyz values:
[figure: Urban_xyz]
[figure: Urban_rpy]
As a result, the errors are quite large without alignment. For example, the ATE statistics are:

       max	96.665043
      mean	53.747229
    median	58.812987
       min	0.178006
      rmse	59.149760
       sse	2676501.026202
       std	24.696751

If I align with ground truth using an SE(3) or Sim(3) transform, the results become strange. With SE(3) (first figure below), the trajectory is significantly shifted, and with Sim(3) (second figure, zoomed in to show the transformed trajectory), it shrinks by a factor of over 100:
[figure: Urban_traj_SE3]
[figure: Urban_traj_SIM3]
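
For reference, the evaluation was done roughly along these lines: a minimal sketch using evo's Python API, assuming both trajectories have first been exported to TUM format (evo does not read g2o directly; the file names here are placeholders):

```python
from evo.core import metrics, sync
from evo.tools import file_interface

# Ground truth and estimate, assumed exported to TUM format beforehand.
traj_ref = file_interface.read_tum_trajectory_file("urban_gt.tum")
traj_est = file_interface.read_tum_trajectory_file("urban_result.tum")

# Match poses by timestamp before comparing.
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)

# Umeyama alignment: correct_scale=False is SE(3), True is Sim(3).
traj_est.align(traj_ref, correct_scale=False)

# Absolute trajectory error on the translation part.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print(ape.get_all_statistics())  # max, mean, median, min, rmse, sse, std
```

On the command line, the `-a` and `-s` flags of `evo_ape` make the same SE(3) / Sim(3) choice.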

These aligned results minimize the error but look strange, so I was wondering whether this output is normal. For clarity, this was the output in RViz, where everything looked fine:
[figure: LAMP_urban]
The same pattern occurs in the other 3 datasets.
Since the outputs are g2o files, I also tried optimizing the graph before comparing to ground truth, but it didn't change much.
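
For what it's worth, that re-optimization step was just along the lines of the following sketch, using GTSAM's Python bindings (the prior on the first key is added to fix the gauge freedom; the output file name is an assumption):

```python
import numpy as np
import gtsam

# Load the pose graph and values from the LAMP output.
graph, initial = gtsam.readG2o("result.g2o", True)  # True -> 3D poses

# Anchor the first pose with a tight prior so the problem is not
# under-constrained.
first_key = list(initial.keys())[0]
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-6))
graph.add(gtsam.PriorFactorPose3(first_key, initial.atPose3(first_key),
                                 prior_noise))

# Re-optimize and write the result back out for evaluation.
params = gtsam.LevenbergMarquardtParams()
result = gtsam.LevenbergMarquardtOptimizer(graph, initial, params).optimize()
gtsam.writeG2o(graph, result, "result_reoptimized.g2o")
```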

I just had a couple of other questions about the implementation:

  1. Are the output/ground-truth files for each dataset the combined poses for all robots? E.g., for Tunnel, is it Husky3 and Husky4?
  2. Can this implementation work with a lidar front-end right now? As I understand it, the inputs from this dataset are pose graphs already provided by the robots, which ran front-ends such as LOCUS and Hovermap. Is it possible to run these front-ends on the raw sensor data?

Thanks,

Sorry for the late response. Has this been resolved?

  1. Hard to tell exactly what is going on with evo, but my guess is that it has something to do with the timestamps, since you are trying to align multiple robot trajectories against the combined ground truth. In general, the keys follow the GTSAM convention where each robot has a different char prefix (see the sketch after this list).
  2. Yes, LAMP should work with a lidar front-end. We have tested it with LOCUS and the Emesent Hovermap system, and it should be possible to run with raw sensor data.
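
To make the key convention concrete, here is a hypothetical sketch of splitting the optimized values by their GTSAM char prefix, so each robot's trajectory can be exported and aligned against its own ground truth. It assumes the g2o vertex ids preserve the symbol-encoded keys; the file name is a placeholder:

```python
from collections import defaultdict
import gtsam

graph, values = gtsam.readG2o("result.g2o", True)  # True -> 3D poses

# Group poses by the char prefix of their GTSAM symbol; each robot's
# keys share one prefix.
robots = defaultdict(list)
for key in values.keys():
    sym = gtsam.Symbol(key)
    robots[chr(sym.chr())].append((sym.index(), values.atPose3(key)))

for prefix, poses in sorted(robots.items()):
    print(f"robot '{prefix}': {len(poses)} poses")
```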