
Auxiliary Tasks Speed Up Learning PointGoal Navigation

Our code is built on top of the Habitat framework and modifies only a handful of files. For a framework overview, please refer to the Habitat documentation. This code was last tested with this commit.

Requirements

  1. Python 3.6+
  2. Recommended: anaconda/miniconda

Installation

Please first create the conda environment from the provided environment.yml file:

conda env create -f environment.yml

To install Habitat, follow the installation guidelines for habitat-sim and habitat-api. No packages beyond these are used in our paper. This codebase is known to be compatible with https://github.com/facebookresearch/habitat-sim/releases/tag/v0.1.6 (we use the headless installation); a sketch of the full setup follows.
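A minimal end-to-end setup might look like the following. This is a sketch, not the repository's documented procedure: the environment name is an assumption (the real one is declared in environment.yml), and the conda route for habitat-sim is one of the installation options described in its README.

    # create and activate the conda environment
    # ("habitat" is a placeholder; use the name declared in environment.yml)
    conda env create -f environment.yml
    conda activate habitat

    # one supported route for habitat-sim v0.1.6 (headless); if the conda
    # package is unavailable, build from source per the habitat-sim README
    conda install habitat-sim=0.1.6 headless -c conda-forge -c aihabitat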

Data

We use the Gibson dataset as distributed by Habitat. You'll need to download the Gibson scene data from habitat-sim and the Gibson PointNav episode data from habitat-api; download and extract them according to the instructions in the Habitat READMEs. An illustrative layout is sketched below.
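After extraction, a conventional Habitat layout (an assumption; confirm against configs/tasks/pointnav_gc.yaml) places everything under a data/ directory at the repository root:

    data/
      scene_datasets/gibson/           # Gibson scene assets from habitat-sim
      datasets/pointnav/gibson/v1/     # Gibson PointNav episodes from habitat-api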

Pre-trained weights

NOTICE: The reported results have changed since the initial preprint in Summer 2020. Early-stopped checkpoints for models trained to 40 million frames are posted here for the baseline, the best single module (reported as "Add" in the paper; cpca-id-td_single), and the multi-module model (cpca-id-td_attn-e). Our reported scores were obtained by running ./scripts/eval_pn.sh 3 times and averaging the scores (see the sketch after the list below). Once your checkpoints are downloaded, point the path in eval_pn.sh at them to get validation metrics. We get the following numbers:

  • baseline: 0.56 SPL.
  • cpca-id-td_single: 0.68 SPL.
  • cpca-id-td_attn-e: 0.71 SPL.
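To reproduce the 3-run averaging protocol, a simple loop suffices. This is a sketch: it assumes eval_pn.sh has already been pointed at the downloaded checkpoint as described above, and that you average the reported SPL by hand or from the logs.

    # run evaluation three times; average the reported SPL across runs
    for i in 1 2 3; do
        ./scripts/eval_pn.sh
    done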

Training

You can train the same models using ./scripts/train_pn.sh <variant_name>. The variant configurations all live under habitat_baselines/config/official_gc, and the datasets they use are specified by the task configuration under configs/tasks/pointnav_gc.yaml. An example invocation follows.
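For example, to train the single-module variant named in the pre-trained weights section (assuming a matching config exists under habitat_baselines/config/official_gc):

    ./scripts/train_pn.sh cpca-id-td_single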

Evaluation

You can evaluate a trained checkpoint by configuring the checkpoint path in ./scripts/eval_pn.sh. To generate videos of your agent, enable VIDEO_OPTION in the variant configuration (a sketch follows); settings controlling which videos are generated live in ppo_trainer.py.
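As a sketch, enabling videos amounts to running evaluation with the video keys set in the variant YAML; the key names below follow habitat-baselines conventions and should be confirmed against the configs in this repository.

    # in the variant YAML (key names per habitat-baselines conventions):
    #   VIDEO_OPTION: ["disk"]    # or ["tensorboard"]
    #   VIDEO_DIR: "videos"       # illustrative output directory
    ./scripts/eval_pn.sh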

Model Analysis

To generate a JSON file with more detailed evaluation information (e.g. actions taken, agent location per episode), see run_detailed.py.
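A hypothetical invocation is below; the script's actual flags are not documented here, so check run_detailed.py's argument parser before running.

    # illustrative only; see run_detailed.py for its real arguments
    python run_detailed.py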

References and Citation

If you use this work, you can cite it as

@inproceedings{ye2020auxiliary,
    title={Auxiliary Tasks Speed Up Learning PointGoal Navigation},
    author={Joel Ye and Dhruv Batra and Erik Wijmans and Abhishek Das},
    year={2020},
    booktitle={Proceedings of the Conference on Robot Learning (CoRL)}
}