ObVi-SLAM is a joint object-visual SLAM approach aimed at long-term multi-session robot deployments.
[Paper with added appendix] [Video]
Offline execution instructions coming soon. ROS implementation coming late 2023/early 2024.
Please email amanda.adkins4242@gmail.com with any questions!
For information on how to set up and run the comparison algorithms, see our evaluation repo.
TODO
- Dockerfile version (recommended)
- native version
TODO
- Explain the files needed and their structure (intrinsics, extrinsics, visual features, bounding boxes (optional), images)
- Explain how to run given these files
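Until the full instructions land, a minimal sketch of verifying that a data directory contains the kinds of inputs listed above. All file and directory names here are hypothetical placeholders, not the names ObVi-SLAM actually expects:

```python
from pathlib import Path

# Hypothetical input layout; the real ObVi-SLAM file names and
# structure will be documented once the instructions are written.
REQUIRED = [
    "intrinsics.txt",       # camera intrinsics
    "extrinsics.txt",       # camera extrinsics
    "visual_features.txt",  # low-level visual feature data
]
OPTIONAL = [
    "bounding_boxes.txt",   # object bounding boxes (optional)
    "images",               # raw images (optional)
]

def check_data_dir(root: str) -> list[str]:
    """Return the names of required inputs missing under `root`."""
    base = Path(root)
    return [name for name in REQUIRED if not (base / name).exists()]
```

A preflight check like this makes missing-input failures explicit before a long offline run starts.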
TODO (Taijing, start here)
- Explain how to preprocess rosbag to get the data needed for minimal execution above
TODO
- Explain how to modify the configuration file -- which parameters need to change for a different environment; (lower priority) explain each parameter in the config file
For our experiments, we used YOLOv5 (based on this repo) with this model.
We used detections with labels 'lamppost', 'treetrunk', 'bench', and 'trashcan' with this configuration file.
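The label filtering described above can be sketched as follows. The detection format here is a simplified stand-in (label, confidence, box), not YOLOv5's actual output schema, and the confidence threshold is an illustrative assumption:

```python
# Object classes used for ObVi-SLAM detections (from the text above).
OBJECT_LABELS = {"lamppost", "treetrunk", "bench", "trashcan"}

def filter_detections(detections, min_conf=0.5):
    """Keep detections whose class is one of the ObVi-SLAM labels.

    `detections` is a list of (label, confidence, (x1, y1, x2, y2))
    tuples; this tuple layout and the `min_conf` default are
    assumptions for illustration, not YOLOv5's native format.
    """
    return [
        det for det in detections
        if det[0] in OBJECT_LABELS and det[1] >= min_conf
    ]
```

In practice the equivalent filtering is driven by the configuration file mentioned above rather than hard-coded.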
Please contact us if you would like to obtain the videos on which we performed the evaluation.
TODO
- Add installation instructions
- Add offline execution instructions