MIT-SPARK/Hydra

Regarding VIO

Closed this issue · 2 comments

Thanks for your great work. Since Hydra takes Kimera-VIO as its odometry input, and I am currently working with monocular camera odometry, I want to use another odometry source such as VINS-Fusion or ORB-SLAM3. What information does Hydra require from the odometry source in order to run? Thank you.

Hi, thanks for your interest in our work! Hydra only requires the pose (which it looks up via TF2) from an odometry source. In addition, Hydra can be configured to subscribe either to a semantically labeled pointcloud (i.e., depth + semantics and optionally color) or to RGB, depth, and semantic label images directly (instead of the labeled pointcloud). Feel free to take a look at the launch files in Hydra-ROS for more details.
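Since Hydra looks up the pose over TF2, an external odometry source like VINS-Fusion or ORB-SLAM3 just needs its pose republished as a TF transform. As a minimal, framework-free sketch of what that pose represents (in ROS you would instead fill a geometry_msgs/TransformStamped and broadcast it with tf2_ros), here is the standard conversion from a position plus unit quaternion into a 4x4 homogeneous transform:

```python
import math

def pose_to_matrix(tx, ty, tz, qx, qy, qz, qw):
    """Convert a position + unit quaternion (the pose Hydra looks up
    via TF2) into a 4x4 homogeneous transform, row-major."""
    # Normalize the quaternion defensively before expanding it.
    n = math.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n
    # Standard quaternion -> rotation matrix expansion.
    R = [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]
    # Append the translation column and the homogeneous bottom row.
    return [
        R[0] + [tx],
        R[1] + [ty],
        R[2] + [tz],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

The frame names (e.g. odom -> base_link) and exact topic layout depend on your launch configuration, so check the Hydra-ROS launch files for the frames Hydra expects.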

As a further note, it might be helpful to point out that if you're using monocular VIO for odometry, your pose will exhibit scale drift and you won't be able to get metric depth from the camera. While I'm aware of at least one case where this was done, it's not really supported in Hydra (most crucially, our backend does not optimize for scale drift), and I don't plan on supporting it. Good luck with your attempt!
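To make the scale-drift point concrete, here is an illustrative sketch (not Hydra code): if the ratio of ground-truth to estimated path length changes across windows of the trajectory, no single global scale correction can fix the estimate, which is why a backend would have to optimize scale explicitly to support monocular input.

```python
# Illustrative sketch: detecting scale drift by comparing traveled
# distance between an estimated and a ground-truth trajectory over
# sliding windows. A constant ratio means a fixed (correctable)
# scale; a changing ratio means scale drift.

def path_length(traj):
    """Sum of Euclidean distances between consecutive 3D positions."""
    total = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(traj, traj[1:]):
        total += ((x1 - x0)**2 + (y1 - y0)**2 + (z1 - z0)**2) ** 0.5
    return total

def windowed_scale(est, gt, window):
    """Per-window ratio of ground-truth to estimated path length."""
    scales = []
    for i in range(0, len(est) - window, window):
        seg_est = path_length(est[i:i + window + 1])
        seg_gt = path_length(gt[i:i + window + 1])
        if seg_est > 0:
            scales.append(seg_gt / seg_est)
    return scales
```

For example, a monocular estimate whose steps shrink relative to the ground truth over time yields an increasing per-window ratio, i.e. drift rather than a fixed scale offset.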

Thank you for your valuable reply; it helped me a lot. I managed to run monocular VIO and it produces good results. Do you have any suggestions for loop closure, instead of using the Kimera-VIO LCD, that could produce this https://github.com/MIT-SPARK/pose_graph_tools/blob/feature/hydra/msg/BowQuery.msg message for loop closure?
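The thread ends here without a maintainer reply, but as an illustrative aside: a BowQuery-style message carries a bag-of-words description of a keyframe, typically produced by quantizing binary local descriptors (e.g. ORB) against a pre-built visual vocabulary, as in DBoW2. The sketch below shows the general technique only; the vocabulary, the inputs, and the sparse (word id -> weight) output are assumptions, so check the linked BowQuery.msg for the actual field layout.

```python
# Illustrative sketch: building a sparse bag-of-words vector from
# binary descriptors, the kind of data a BoW loop-closure query
# carries. The vocabulary here is a flat list of binary words; real
# systems (e.g. DBoW2) use a hierarchical vocabulary tree instead.

def hamming(a, b):
    """Hamming distance between two binary descriptors (as ints)."""
    return bin(a ^ b).count("1")

def quantize(descriptor, vocabulary):
    """Index of the nearest vocabulary word (the 'visual word')."""
    return min(range(len(vocabulary)),
               key=lambda i: hamming(descriptor, vocabulary[i]))

def bow_vector(descriptors, vocabulary):
    """Sparse, L1-normalized bag-of-words vector: {word_id: weight}."""
    counts = {}
    for d in descriptors:
        w = quantize(d, vocabulary)
        counts[w] = counts.get(w, 0) + 1
    total = float(sum(counts.values()))
    return {w: c / total for w, c in counts.items()}
```

Two keyframes can then be compared by a similarity score over their sparse vectors, and candidate matches above a threshold become loop-closure queries.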