MISTLab/Swarm-SLAM

Odometry runs, but PGO still can't run normally


@lajoiepy Now RTAB-Map odometry works fine, but it seems there is still a problem: the odometry stops after running, and there is no follow-up response. Nothing shows up when using cslam_visualization. I set the 'enable_logs' variable to true in the yaml file, but no data is saved locally.

I am currently using the following topics: rgb_topic, depth_topic, camera_info_topic.

But I saw in your documentation that the cslam topics are left_image_topic, right_image_topic, left_camera_info_topic, right_camera_info_topic, etc.
So I wonder if this will be a problem.

Hi!

  • To double-check the odometry, I suggest that you visualize it in rviz. Launch rviz2, click on Add and select your odometry by topic, then make sure to set the correct Fixed Frame under Global Options. You should then be able to visualize the odometry output and figure out whether it is correct (see the sketch after this list for a way to inspect the odometry messages directly).

  • The left and right topics are for stereo cameras. Look here for config examples for stereo, RGB-D, or lidar setups.

  • Finally, make sure that all the topics are well connected, i.e. that Swarm-SLAM subscribes to the correct topics. rqt_graph should help you figure this out.
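Here is a minimal sketch of such a check, assuming the odometry is published as nav_msgs/Odometry; the topic name `/r0/odom` below is a placeholder, so substitute whatever your launch file actually uses:

```python
# odom_check.py -- minimal sanity check: print incoming odometry poses.
# The topic name '/r0/odom' is a placeholder; adjust it to your setup.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class OdomCheck(Node):
    def __init__(self):
        super().__init__('odom_check')
        self.create_subscription(Odometry, '/r0/odom', self.callback, 10)

    def callback(self, msg):
        p = msg.pose.pose.position
        stamp = msg.header.stamp
        self.get_logger().info(
            f'stamp={stamp.sec}.{stamp.nanosec:09d} x={p.x:.3f} y={p.y:.3f} z={p.z:.3f}')


def main():
    rclpy.init()
    rclpy.spin(OdomCheck())


if __name__ == '__main__':
    main()
```

If nothing is printed, the odometry topic is either not being published or its name does not match what Swarm-SLAM subscribes to, which rqt_graph will confirm.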

@lajoiepy
Our dataset is synthetic: it contains only the RGB-D frames and the ground-truth 6DoF pose for each frame, with no lidar, IMU, or GPS data.
Running our RGB-D dataset with Swarm-SLAM now seems to be working fine.
But our goal is to compute the ATE between the estimated poses and the ground-truth poses.

I have read your reply in [How does the estimated pose correspond to the GT pose #14](#14).

But our data does not include GPS ground truth, only 6DoF ground-truth poses. We want to use the 6DoF ground truth together with the poses estimated by Swarm-SLAM to compute the ATE.
How can I do that?

And there is another question:
My dataset has 2 robots, but in the results folder only robot 0 contains the initial_global_pose_graph.g2o and optimized_global_pose_graph.g2o files; robot 1 does not have these two files, only two empty files, log.csv and pose_timestamps1.csv.
It seems the Swarm-SLAM pose estimates I need are in the optimized_global_pose_graph.g2o file, but how do I find the correspondence between the poses in this file and the ground-truth poses?

Image of rviz visualization

The files in the results folder for robot 0

The files in the results folder for robot 1

It's great to see your progress!
In the optimized pose graph file, the long number after VERTEX_SE3:QUAT is a gtsam::LabeledSymbol key corresponding to one pose of one robot. You can see here how to retrieve the robot and pose ids for each key: https://github.com/lajoiepy/cslam/blob/1a7b73245460b990288daaae200fe30fcdfbdd8e/src/back_end/gtsam_utils.cpp#L44
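As a rough sketch, the decoding in Python looks like this, assuming gtsam's default LabeledSymbol bit layout (an 8-bit character, an 8-bit label, and a 48-bit index packed into the 64-bit key); which of the two byte fields carries the robot id depends on how cslam constructs the keys, so cross-check with gtsam_utils.cpp linked above:

```python
def decode_labeled_symbol(key: int):
    """Split a raw 64-bit gtsam::LabeledSymbol key into (character, label, index).

    Assumes gtsam's default layout: 8-bit character, 8-bit label, 48-bit index.
    """
    character = chr((key >> 56) & 0xFF)
    label = chr((key >> 48) & 0xFF)
    index = key & ((1 << 48) - 1)
    return character, label, index


# Self-check: build a key for ('x', 'a', 42) and decode it back.
example = (ord('x') << 56) | (ord('a') << 48) | 42
print(decode_labeled_symbol(example))  # ('x', 'a', 42)
```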

Then, as you pointed out, the evaluation currently works only with GPS positions, so you will need to figure out how to associate each of your 6DOF GT poses with the estimated poses (probably using the timestamps).

Once you know which GT pose corresponds to each optimized pose, I suggest you use evo to compute the accuracy metrics: https://github.com/MichaelGrupp/evo
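Here is a rough sketch of that association, assuming the standard g2o line format `VERTEX_SE3:QUAT id x y z qx qy qz qw`, the LabeledSymbol layout above, and a pose_timestamps CSV whose rows are `pose_index,timestamp` (the CSV layout and the file names are assumptions, so adapt them to what is actually in your results folder). It writes a TUM-style trajectory (`timestamp x y z qx qy qz qw`) that evo can read directly:

```python
import csv


def read_pose_timestamps(csv_path):
    """Map pose index -> timestamp. Assumed row layout: pose_index,timestamp."""
    stamps = {}
    with open(csv_path) as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                stamps[int(row[0])] = float(row[1])
    return stamps


def g2o_to_tum(g2o_path, stamps, tum_path):
    """Write VERTEX_SE3:QUAT entries as TUM lines: timestamp x y z qx qy qz qw."""
    with open(g2o_path) as fin, open(tum_path, 'w') as fout:
        for line in fin:
            parts = line.split()
            if not parts or parts[0] != 'VERTEX_SE3:QUAT':
                continue
            key = int(parts[1])
            index = key & ((1 << 48) - 1)  # pose id, assuming the LabeledSymbol layout
            if index in stamps:
                fout.write(f"{stamps[index]:.9f} " + " ".join(parts[2:9]) + "\n")


stamps = read_pose_timestamps('pose_timestamps0.csv')  # file name is an assumption
g2o_to_tum('optimized_global_pose_graph.g2o', stamps, 'robot0_estimated.txt')
# Then, for example: evo_ape tum gt_robot0.txt robot0_estimated.txt --align_origin --plot
```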

@lajoiepy
Regarding the optimized pose graph file: as can be seen from the code here, you are using the gtsam::writeG2o function to write the file to disk, but the long number after VERTEX_SE3:QUAT does not seem to have anything to do with timestamps.

In addition, according to the writeG2o code here, that number is a Symbol rather than the LabeledSymbol type you mentioned.
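If that number really is a plain gtsam::Symbol, its layout is an 8-bit character above a 56-bit index, so I would decode it like this (a sketch based on my reading of the gtsam headers; the raw 64-bit integer written by writeG2o is the same either way, only the interpretation of its bit fields differs):

```python
def decode_symbol(key: int):
    """Split a raw 64-bit gtsam::Symbol key into (character, index).

    gtsam::Symbol packs an 8-bit character above a 56-bit index.
    """
    return chr((key >> 56) & 0xFF), key & ((1 << 56) - 1)


# Try both decoders on a few keys from the g2o file and keep whichever
# yields plausible (robot id, pose index) pairs.
print(decode_symbol((ord('a') << 56) | 7))  # ('a', 7)
```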

This has caused me a lot of trouble in aligning the estimated poses with the GT poses.
Could you please give me more advice? Thx!

I have now successfully run Swarm-SLAM on the RGB-D dataset and computed the associated ATE with evo.
For a 2-room scene:

Visualization results of Swarm-SLAM in rviz:
[Images: apart_1_1, apart_1_2]

Visual and Quantitative Results of One Robot by EVO
evo_ape tum gps_robot_1.txt robot1_estimated.txt --plot --plot_mode xyz --align_origin

APE w.r.t. translation part (m)
(with origin alignment)

       max	0.169399
      mean	0.041709
    median	0.038966
       min	0.000000
      rmse	0.046227
       sse	0.237200
       std	0.019932

[APE plot for robot 0, origin-aligned]

Visual and Quantitative Results of another Robot by EVO

evo_ape tum gps_robot_1.txt robot1_estimated.txt --plot --plot_mode xyz --align_origin

APE w.r.t. translation part (m)
(with origin alignment)

       max	0.216387
      mean	0.052737
    median	0.043886
       min	0.000000
      rmse	0.065038
       sse	0.731785
       std	0.038063

[APE plot for robot 1, origin-aligned]

Thank you for your patience and your reply! Thank you again!

Awesome! :D

@TwiceMao Hello, TwiceMao! Excuse me for disturbing you again. Could you please explain how you visualized your point cloud and the trajectories shown above? I have absolutely no idea how to do that using Swarm-SLAM. Thank you again!

Hello @TwiceMao, I'm in the same boat as you, where I have ground-truth data as timestamped 6DoF poses with no GPS data. How did you manage to evaluate the estimated poses from g2o with evo? As mentioned in this issue, there's no unix timestamp associated with g2o files and we need some correspondence between ground-truth and estimated poses to align with Umeyama's method.