VANP

PyTorch implementation of "Learning Where to See for Navigation: A Self-Supervised Vision-Action Pre-Training Approach"


License: MIT

Mohammad Nazeri, Junzhe Wang, Amirreza Payandeh, Xuesu Xiao


We present VANP, a Self-Supervised Vision-Action Pre-training approach for visual navigation. Instead of detecting salient objects that are beneficial for tasks such as classification or detection, VANP learns to focus only on the visual regions that are relevant to the navigation task.

To achieve this, VANP uses a history of visual observations, future actions, and a goal image for self-supervision, and embeds them using two small Transformer Encoders. VANP then maximizes the mutual information between these embeddings using a mutual-information-maximization objective.
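
The exact objective lives in the pretext training code; as a rough, hedged illustration of the idea, a Barlow Twins-style cross-correlation loss between a batch of vision embeddings and a batch of action/goal embeddings could look like the sketch below. The names vision_emb, action_emb, and lambda_offdiag are placeholders for illustration, not the repository's API.

import torch

def barlow_twins_style_loss(vision_emb: torch.Tensor,
                            action_emb: torch.Tensor,
                            lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """Barlow Twins-style objective between two embedding batches of shape (N, D).

    Pushes the cross-correlation matrix of the standardized embeddings toward
    the identity: diagonal terms toward 1 (shared information), off-diagonal
    terms toward 0 (decorrelated, non-redundant features).
    """
    n, d = vision_emb.shape
    # Standardize each embedding dimension across the batch.
    v = (vision_emb - vision_emb.mean(0)) / (vision_emb.std(0) + 1e-6)
    a = (action_emb - action_emb.mean(0)) / (action_emb.std(0) + 1e-6)
    # D x D cross-correlation matrix between the two embedding spaces.
    c = (v.T @ a) / n
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag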

This repository contains three components: (1) code to parse bag files from MuSoHu and SCAND, (2) training code for the pretext task, and (3) training and validation code for the downstream navigation task using the pre-trained model.

Updates

  • Jul 29, 2024: Fixed a bug in PositionalEncoding (a generic sinusoidal encoding is sketched after this list for reference).
  • Jul 26, 2024: Added more augmentations and gradient accumulation for better performance.
  • Jul 26, 2024: Code cleanup and general bug fixes.
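
For reference, the sketch mentioned above: a textbook sinusoidal positional encoding module in PyTorch. This is a generic implementation under common assumptions (even d_model, batch-first input of shape (batch, seq, d_model)), not the repository's exact code.

import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding added to a (batch, seq, d_model) input."""

    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))    # (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Add the encoding for the first seq_len positions.
        return x + self.pe[:, : x.size(1)]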

Installation

Main libraries:

  • PyTorch: as the main ML framework
  • Comet.ml: experiment tracking and logging
  • OmegaConf: for managing configuration files

First, create a virtual environment for the project:

python3 -m venv .venv
source .venv/bin/activate

Then install the latest version of PyTorch from the official site. Finally, run the following:

pip install -r requirements.txt

To set up Comet.ml, follow the official documentation.
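
As a quick sanity check that Comet.ml is configured, you can create and close an experiment from Python. The project and workspace names below are placeholders for your own account, not values prescribed by this repository.

from comet_ml import Experiment

# Placeholder project/workspace names; replace with your own Comet.ml account values.
experiment = Experiment(
    api_key="YOUR_COMET_API_KEY",   # or set the COMET_API_KEY environment variable instead
    project_name="vanp",
    workspace="your-workspace",
)
experiment.log_parameter("sanity_check", True)  # confirm that logging reaches the dashboard
experiment.end()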

Dataset

We used MuSoHu and SCAND. Please follow this guide to download and parse the datasets.
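
The guide covers the actual parsing scripts; conceptually, they iterate over ROS bag topics and export synchronized image/odometry samples. A minimal, illustrative sketch with the rosbag Python API is shown below; the topic names are placeholders and differ between MuSoHu and SCAND.

import rosbag

# Placeholder topic names; check the parsing guide for the actual MuSoHu/SCAND topics.
IMAGE_TOPIC = "/camera/color/image_raw"
ODOM_TOPIC = "/odom"

with rosbag.Bag("example.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=[IMAGE_TOPIC, ODOM_TOPIC]):
        if topic == IMAGE_TOPIC:
            # msg is a sensor_msgs/Image; convert with cv_bridge before saving to disk.
            pass
        elif topic == ODOM_TOPIC:
            # msg is a nav_msgs/Odometry; extract pose/twist to build action labels.
            pass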

Applying VANP to Your Data

  • If you want to apply VANP to your own dataset, make sure it does not contain static sequences (sequences with no change between frames) for better results; a simple filter is sketched after this list. Please also read the limitations section of the paper.
  • Removing the action head is possible but generally not advised; at a minimum, keep it during warmup.
  • You can change the hyperparameters in the config file and the level of augmentation in the dataloader to improve the results.
  • You can add embeddings from other models, such as Segment Anything and Depth Anything, in this section of the code to enrich the embedding space with even more information.
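
One simple way to filter static sequences, as referenced in the first item above, is to threshold the mean frame-to-frame pixel difference. This is a hypothetical heuristic, not part of the repository, and the threshold is an assumption to tune for your camera and environment.

import numpy as np

def is_static(frames: np.ndarray, threshold: float = 1.0) -> bool:
    """Heuristic static-sequence check for a (T, H, W, C) uint8 frame stack.

    Flags the sequence as static when the mean absolute pixel difference
    between consecutive frames stays below `threshold` (a placeholder value).
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean()) < threshold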

Model Training

To run pretext and downstream training, first edit the config file with the proper directory paths and adjust the hyperparameters as you see fit, then run:

./run.sh train
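
run.sh wraps the training entry points. If you prefer overriding settings from the command line instead of editing the YAML by hand, OmegaConf supports merging CLI overrides; the file name and keys below are placeholders, not the repository's exact config schema.

from omegaconf import OmegaConf

# Placeholder config file name and keys; use the repository's actual config and schema.
cfg = OmegaConf.load("config.yaml")
cli_overrides = OmegaConf.from_cli()   # e.g. `python train.py train.lr=1e-4`
cfg = OmegaConf.merge(cfg, cli_overrides)
print(OmegaConf.to_yaml(cfg))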

Acknowledgements

Thanks to the authors of the GNM, VICReg, and Barlow Twins papers for making their code public.

Citation

If you find the code helpful, please cite this work:

@article{nazeri2024vanp,
  title={VANP: Learning Where to See for Navigation with Self-Supervised Vision-Action Pre-Training},
  author={Nazeri, Mohammad and Wang, Junzhe and Payandeh, Amirreza and Xiao, Xuesu},
  journal={arXiv preprint arXiv:2403.08109},
  year={2024}
}