# Steps to Run
Check out our paper at: https://ieeexplore.ieee.org/document/9837164
- Create a sample of images for which the ground-truth poses and the ground-truth corners of the semantic objects present in them are available.
- Place the ground-truth poses inside the SPTAM folder. A sample file is provided in the SPTAM folder for reference.
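A minimal sketch of this step; the pose filename below is hypothetical, so mirror whatever format the sample file in SPTAM uses:

```sh
# Hypothetical filename; match the format of the sample file already in SPTAM/.
cp /path/to/ground_truth_poses.txt SPTAM/
```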
- Update the config file with the dataset paths and the paths to the characteristic images.
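The config format is specific to this repo, so the following is only a hedged sketch; the filename `config.yaml` and the key names are assumptions, not the actual schema:

```sh
# Sketch only: the real config filename and key names depend on this repo.
# Point entries like these at your data before running the pipeline:
#   dataset_path: /path/to/dataset
#   characteristic_images: /path/to/characteristic_images
${EDITOR:-nano} config.yaml
```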
- Go to the object_detection folder; it contains two directories: object_detector (to train on permanent objects) and alphabet_detector (to train on temporarily placed alphabet placards).
- Go to object_detector and run train.py to train the object detection model (commands for both detectors are sketched after the next step).
- Go to alphabet_detector and run train.py to train the alphabet detection model.
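A hedged command sketch covering both training steps, assuming each train.py runs without required CLI arguments (check each script's options before running):

```sh
# Train the detector for permanent objects
cd object_detection/object_detector
python train.py

# Train the detector for temporarily placed alphabet placards
cd ../alphabet_detector
python train.py
```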
- Sample images from those with ground-truth poses to create the characteristic images.
- Assign each image in this sample the label of its corresponding characteristic image.
- Inside Place_recognition, run generate_semantic_enhanced_images.py to generate semantically enhanced images (see the combined command sketch below).
- Run training_with_semantics.py inside Place_recognition to train the place recognition model.
- Run final_pipeline.py inside Place_recognition to generate a JSON file.
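The three Place_recognition scripts can be run back to back; this sketch assumes none of them requires CLI arguments, so verify each script's options first:

```sh
cd Place_recognition
python generate_semantic_enhanced_images.py   # produce semantically enhanced images
python training_with_semantics.py             # train the place recognition model
python final_pipeline.py                      # write the output JSON file
```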
- Run run.py inside corner_detection to generate object corners in two JSON files (one each for the left and right images).
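A sketch of the corner-detection step under the same no-required-arguments assumption:

```sh
cd corner_detection
python run.py   # writes two JSON files: object corners for the left and right images
```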
- To install SPTAM, follow the steps at: https://github.com/uoip/stereo_ptam
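For reference, cloning the SPTAM implementation looks like this; its dependencies are listed in that repo's README:

```sh
git clone https://github.com/uoip/stereo_ptam.git
# Follow that repo's README to install its dependencies before continuing.
```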
- Run place_image.py inside SPTAM (both commands are sketched below).
- Run sptam.py --path=/path/to/dataset to generate final_positions.txt (inside the output_files directory) as the final result.
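The final two steps, sketched with the same assumption that place_image.py takes no required arguments:

```sh
cd SPTAM
python place_image.py
python sptam.py --path=/path/to/dataset
# Final results: output_files/final_positions.txt
```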