This repository is the official implementation of *Is Mapping Necessary for Realistic PointGoal Navigation?* (CVPR 2022).
This project is developed with Python 3.6. The recommended way to set up the environment is with the Miniconda or Anaconda package/environment management system:
```bash
conda create -n pointgoalnav-env python=3.6 cmake=3.14.0 -y
conda activate pointgoalnav-env
```
IMN-RPG uses Habitat-Sim 0.1.7 (commit 856d4b0), which can be built from source or installed from conda:
```bash
conda install -c aihabitat -c conda-forge habitat-sim=0.1.7 headless
```
Then install Habitat-Lab:
```bash
git clone --branch challenge-2021 git@github.com:facebookresearch/habitat-lab.git
cd habitat-lab
pip install -r requirements.txt
pip install -r habitat_baselines/rl/requirements.txt
pip install -r habitat_baselines/rl/ddppo/requirements.txt
pip install -r habitat_baselines/il/requirements.txt
# installs both habitat and habitat_baselines
python setup.py develop --all
```
Now you can install IMN-RPG:
```bash
git clone git@github.com:rpartsey/pointgoal-navigation.git
cd pointgoal-navigation
python -m pip install -r requirements.txt
```
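As a quick sanity check, both packages should now import cleanly (the `__version__` attributes are assumed to be present in these releases):

```bash
python -c "import habitat; print(habitat.__version__)"
python -c "import habitat_sim; print(habitat_sim.__version__)"
```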
Download Gibson and Matterport3D scenes by following the instructions in Habitat-Lab's Data section.
Download the Gibson PointGoal navigation episodes corresponding to the Sim2LoCoBot experiment configuration (row 2 in the Task dataset table; a.k.a. Gibson-v2 episodes). Matterport3D episodes with the same configuration did not previously exist and were generated as part of the IMN-RPG research. Train/val episodes can be downloaded here.
After downloading the data, create a symlink to the data/ directory in the IMN-RPG project root:
```bash
cd pointgoal-navigation
ln -s <path-to-data-directory> data
```
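After linking, the layout should resemble the standard Habitat data structure (a sketch; exact subdirectories depend on which scenes and episode datasets you downloaded):

```
data
├── scene_datasets
│   ├── gibson/   # Gibson .glb scene files
│   └── mp3d/     # Matterport3D scenes
└── datasets
    └── pointnav
        ├── gibson/v2/{train,val}/
        └── mp3d/
```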
The visual odometry dataset is collected by sampling pairs of RGB-D observations (plus additional information) from agent rollout trajectories; see generate_trajectory_dataset_par.py in the trajectory-sampling directory.
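Conceptually, each sample pairs two observations with the action taken and the ground-truth egomotion between them. A sketch of what one sample might contain (field names and shapes are hypothetical; the actual schema is defined by generate_trajectory_dataset_par.py):

```python
import numpy as np

# Hypothetical layout of one VO training sample.
sample = {
    "source_rgb": np.zeros((360, 640, 3), dtype=np.uint8),
    "source_depth": np.zeros((360, 640, 1), dtype=np.float32),
    "target_rgb": np.zeros((360, 640, 3), dtype=np.uint8),
    "target_depth": np.zeros((360, 640, 1), dtype=np.float32),
    "action": 1,                                 # e.g., MOVE_FORWARD
    "egomotion": np.zeros(4, dtype=np.float32),  # (dx, dy, dz, dyaw)
}
```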
Before running generate_trajectory_dataset_par.py, add the project root directory to the PYTHONPATH:
```bash
export PYTHONPATH="<path-to-pointgoal-navigation-directory>:${PYTHONPATH}"
```
and create a symlink to the data/ directory:
```bash
ln -s <path-to-data-directory> <path-to-pointgoal-navigation-directory>/trajectory-sampling/data/
```
To generate the training dataset, run:
```bash
python generate_trajectory_dataset_par.py \
    --agent-type spf \
    --data-dir data/vo_datasets/hc_2021 \
    --config-file ../config_files/shortest_path_follower/shortest_path_follower.yaml \
    --base-task-config-file ../config_files/challenge_pointnav2021.local.rgbd.yaml \
    --dataset gibson \
    --split train \
    --num-episodes-per-scene 2000 \
    --pts-frac-per-episode 0.2 \
    --gpu-ids 0 1 \
    --num-processes-per-gpu 10
```
The above command was used to generate the training dataset (disk space: 592.3 GB; dataset length: 1,627,439 samples). The 0.5M and 1.5M datasets reported in the paper were uniformly sampled from this generated dataset.
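The uniform sampling itself is straightforward; for example, to draw a 0.5M-sample index subset (a sketch, assuming the dataset is addressable by integer index):

```python
import numpy as np

# Uniformly sample 0.5M indices from the full 1,627,439-sample dataset.
rng = np.random.default_rng(seed=0)
indices = rng.choice(1_627_439, size=500_000, replace=False)
indices.sort()  # preserve the original sample order
np.save("vo_dataset_indices_0.5M.npy", indices)
```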
To generate the validation dataset, run:
```bash
python generate_trajectory_dataset_par.py \
    --agent-type spf \
    --data-dir data/vo_datasets/hc_2021 \
    --config-file ../config_files/shortest_path_follower/shortest_path_follower.yaml \
    --base-task-config-file ../config_files/challenge_pointnav2021.local.rgbd.yaml \
    --dataset gibson \
    --split val \
    --num-episodes-per-scene 71 \
    --pts-frac-per-episode 0.75 \
    --gpu-ids 0 1 \
    --num-processes-per-gpu 10
```
The above command was used to generate the validation dataset (disk space: 16.2 GB; dataset length: 44,379 samples).
We use the policy training pipeline from habitat_baselines; see navigation/experiments/experiment_launcher.sh.
Experiment configuration parameters are set in a YAML file; see config_files/odometry/paper/* for the configurations used in the paper.
To train the visual odometry model, run:
```bash
python train_odometry_v2.py --config-file <path-to-config-file>
```
For multiple GPUs/nodes, you may use torch.distributed.launch:
```bash
python -u -m torch.distributed.launch --use_env --nproc_per_node=2 train_odometry_v2.py --config-file <path-to-config-file>
```
or Slurm (see the odometry/experiments/run_experiment.* files).
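With --use_env, torch.distributed.launch passes the worker rank through environment variables instead of a --local_rank argument. A minimal sketch of the initialization pattern a launched script relies on (this is the standard PyTorch recipe, not necessarily train_odometry_v2.py's exact internals):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# torch.distributed.launch --use_env sets LOCAL_RANK, RANK, WORLD_SIZE.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")

model = torch.nn.Linear(64, 4).cuda()  # stand-in for the VO model
model = DistributedDataParallel(model, device_ids=[local_rank])
```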
To benchmark the agent (navigation policy + visual odometry), run:
```bash
export CHALLENGE_CONFIG_FILE=config_files/challenge_pointnav2021.local.rgbd.yaml
python agent.py \
    --agent-type PPOAgentV2 \
    --input-type depth \
    --evaluation local \
    --ddppo-checkpoint-path <path-to-policy-checkpoint> \
    --ddppo-config-path config_files/ddppo/ddppo_pointnav_2021.yaml \
    --vo-config-path <path-to-vo-config> \
    --vo-checkpoint-path <path-to-vo-checkpoint> \
    --pth-gpu-id 0 \
    --rotation-regularization-on \
    --vertical-flip-on
```
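The --vertical-flip-on and --rotation-regularization-on flags enable the observation-transformation tricks studied in the paper at inference time. As a generic illustration of flip-based test-time augmentation for egomotion estimation (not the repository's actual implementation; vo_model and its output convention are hypothetical):

```python
import torch

def predict_with_flip_tta(vo_model, rgb, depth):
    """Average VO predictions over the original and mirrored pair.

    Assumes (B, C, H, W) tensors, so dims=[-1] flips image width.
    """
    dx, dy, dz, dyaw = vo_model(rgb, depth)

    # Mirroring the observations negates lateral translation and yaw.
    rgb_f = torch.flip(rgb, dims=[-1])
    depth_f = torch.flip(depth, dims=[-1])
    dx_f, dy_f, dz_f, dyaw_f = vo_model(rgb_f, depth_f)

    return ((dx - dx_f) / 2, (dy + dy_f) / 2,
            (dz + dz_f) / 2, (dyaw - dyaw_f) / 2)
```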
Checkpoints may be downloaded from Google Drive manually or by using gdown.
| Training scenes | Terminal reward | Download | Task setting |
|---|---|---|---|
| Gibson 4+ | 2.5 · Success | Link | Habitat Challenge 2021 |
| Gibson 0+ | 2.5 · SPL | Link | Habitat Challenge 2021 |
| HM3D-MP3D-Gibson 0+ | 2.5 · SPL | Link | Sim2real |
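For example, with gdown installed (substitute the file ID from the corresponding Link above):

```bash
pip install gdown
gdown <google-drive-file-id> -O checkpoints/policy.pth
```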
To see the policy training config, download a checkpoint and run:

```python
import torch

# map_location avoids requiring the GPU the checkpoint was saved on
checkpoint = torch.load('<path-to-policy-checkpoint>', map_location='cpu')
print(checkpoint['config'])
```
| # | Dataset Size (M) | VO Encoder | VO Size (M) | Embedding 1FC | Embedding 2FC | Flip | Swap | Train time (epochs) | Download | Notes |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.5 | ResNet18 | 4.2 |  |  |  |  | 50 | Link | Gibson |
| 2 | 0.5 | ResNet18 | 4.2 | ✔ |  |  |  | 43 | Link |  |
| 3 | 0.5 | ResNet18 | 4.2 | ✔ | ✔ |  |  | 44 | Link |  |
| 4 | 0.5 | ResNet18 | 4.2 | ✔ | ✔ | ✔ |  | 48 | Link |  |
| 5 | 0.5 | ResNet18 | 4.2 | ✔ | ✔ |  | ✔ | 50 | Link |  |
| 6 | 0.5 | ResNet18 | 4.2 | ✔ | ✔ | ✔ | ✔ | 50 | Link |  |
| 7 | 1.5 | ResNet18 | 4.2 | ✔ | ✔ |  |  | 48 | Link |  |
| 8 | 1.5 | ResNet18 | 4.2 | ✔ | ✔ | ✔ | ✔ | 50 | Link |  |
| 9 | 1.5 | ResNet50 | 7.6 | ✔ | ✔ | ✔ | ✔ | 48 | Link |  |
| 10 | 5 | ResNet50 | 7.6 | ✔ | ✔ | ✔ | ✔ | 64 | Link |  |
| 11 | ? | ResNet50 | 7.6 | ✔ | ✔ | ✔ | ✔ | 32 | Link | MP3D fine-tuned |
| 12 | ? | ResNet50 | 7.6 | ✔ | ✔ | ✔ | ✔ | 56 | Link | Sim2real |
We improve the Realistic PointNav agent's navigation performance from 64% Success / 52% SPL to 96% Success / 77% SPL on the Gibson val split, and achieve the following performance on the Habitat Challenge 2021 benchmark test-standard split (retrieved 2021-Nov-16):
| Rank | Participant team | SPL | SoftSPL | Distance to goal | Success |
|---|---|---|---|---|---|
| 1 | VO for Realistic PointGoal (Ours) | 0.74 | 0.76 | 0.21 | 0.94 |
| 2 | inspir.ai robotics | 0.70 | 0.71 | 0.70 | 0.91 |
| 3 | VO2021 | 0.59 | 0.69 | 0.53 | 0.78 |
| 4 | Differentiable SLAM-net | 0.47 | 0.60 | 1.74 | 0.65 |
We have deployed our agent (with no sim2real adaptation) onto a LoCoBot. It achieves 11% Success, 71% SoftSPL, and makes it 92% of the way to the goal (SoftSuccess). See third-person videos and mapped routes on our website.
If you use IMN-RPG in your research, please cite our paper:
```bibtex
@InProceedings{Partsey_2022_CVPR,
    author    = {Partsey, Ruslan and Wijmans, Erik and Yokoyama, Naoki and Dobosevych, Oles and Batra, Dhruv and Maksymets, Oleksandr},
    title     = {Is Mapping Necessary for Realistic PointGoal Navigation?},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {17232-17241}
}
```