This repository contains the submission guidelines and starter code for the MultiON Challenge 2021. For an overview of the challenge, check the challenge webpage. To participate, visit the EvalAI challenge page.
In MultiON, an agent is tasked with navigating to a sequence of objects inserted into a realistic 3D environment. The challenge uses AI Habitat for simulation and scenes from the Matterport3D dataset. The target objects are randomly sampled from a set of 8 cylinders with identical shapes but different colors.
In each episode, the agent is initialized at a random starting position and orientation in an unseen environment and is given a list of target objects randomly sampled (without replacement) from the set of 8 objects. The agent must navigate to each object in the list, in order, and call a FOUND action to indicate its discovery. The agent has access to an RGB-D camera and a noiseless GPS+Compass sensor, which provides the agent's current position and orientation relative to the start of the episode. The episode terminates when the agent finds all the objects in the episode, when it calls an incorrect FOUND action, or when it exhausts its time budget.
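To make the episode protocol concrete, here is a minimal sketch of the control loop. All names here (`env`, `found_was_correct`, `all_goals_found`, the default time budget) are our own assumptions for illustration, not the starter code's API.

```python
def run_episode(env, agent, time_budget=2500):
    """Sketch of one episode under the rules above (hypothetical env API)."""
    observations = env.reset()
    agent.reset()
    for _ in range(time_budget):
        action = agent.act(observations)
        observations = env.step(action)
        if action == "FOUND":
            if not env.found_was_correct():
                return "failure: incorrect FOUND call"
            if env.all_goals_found():
                return "success: all objects found"
    return "failure: time budget exhausted"
```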
We use Matterport3D scenes for the challenge and follow the standard train/val/test split recommended by Anderson et al. Each episode contains three sequential targets, so the challenge focuses on the 3-ON (3 object navigation) task.
We extend the evaluation protocol of ObjectNav and use two metrics to evaluate agent performance:

- Progress: The fraction of object goals that are successfully FOUND. This measures whether the agent was able to navigate to the goals.
- PPL: Overall path length weighted by progress. This measures the path efficiency of the agent. Formally, PPL = Progress · d / max(d, p), where d is the geodesic (shortest-path) distance from the starting position through the goals the agent progressed to, in order, and p is the length of the path actually taken by the agent.
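As an illustration, per-episode versions of these two metrics could be computed as below. This is a sketch with our own variable names, not the challenge's evaluation code: `shortest_path_length` is the geodesic distance from the start through the goals the agent progressed to, and `agent_path_length` is the length of the path the agent actually took.

```python
def progress(num_goals_found: int, num_goals: int) -> float:
    """Fraction of this episode's goals that were successfully FOUND."""
    return num_goals_found / num_goals


def ppl(progress_value: float,
        shortest_path_length: float,
        agent_path_length: float) -> float:
    """Progress weighted by path length (analogous to SPL in ObjectNav)."""
    return progress_value * shortest_path_length / max(
        shortest_path_length, agent_path_length)
```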
To participate in the challenge, visit our EvalAI page. Participants need to upload docker containers with their agents using EvalAI. Before making your submission, you should run your container locally on the mini-val data split to ensure that the local performance metrics match those of the remote evaluation. We provide a base docker image; participants only need to edit the evaluate.py file, which implements the navigation agent. Instructions for building your docker container are provided below.
- Install nvidia-docker v2 by following the instructions given here.

- Clone this repository:

  ```
  git clone https://github.com/saimwani/multion-challenge.git
  cd multion-challenge
  ```

- Edit `evaluate.py` to implement your agent. Currently, it implements an agent that takes random actions (see the random-agent sketch after this list).

- Make changes in the provided Dockerfile if your agent has additional dependencies. They should be installed inside a conda environment named `habitat` that already exists in our docker image.

- Build the docker container (this may need `sudo` privileges):

  ```
  docker build -t multi_on .
  ```
- Download the Matterport3D scenes for Habitat here and place the data in `multion-challenge/data/scene_datasets/mp3d`. The minival dataset is already contained in `multion-challenge/data/3_ON_minival`.

- Test the docker container locally:

  ```
  docker run -v multion-challenge/data:/multion-chal-starter/data --runtime=nvidia multi_on:latest
  ```
  You should see an output like this:

  ```
  2021-02-05 11:28:19,591 Initializing dataset MultiNav-v1
  2021-02-05 11:28:19,592 initializing sim Sim-v0
  2021-02-05 11:28:25,368 Initializing task MultiNav-v1
  Progress: 0.0
  PPL: 0.0
  Success: 0.0
  SPL: 0.0
  ```
- Install EvalAI and submit your docker image. See detailed instructions here.

  ```
  # Install the EvalAI Command Line Interface
  pip install "evalai>=1.3.5"

  # Set your EvalAI account token
  evalai set_token <your EvalAI participant token>

  # Push the docker image to the EvalAI docker registry
  evalai push multi_on:latest --phase <phase-name>
  ```
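As a starting point for the `evaluate.py` step above, a random agent might have the following shape. This is a minimal sketch assuming a habitat-style agent interface with `reset` and `act`; the class name and action names here are our assumptions, and the actual names in the starter code may differ.

```python
import random

# Assumed action names; check the starter code for the exact set.
ACTIONS = ["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "FOUND"]


class RandomAgent:
    """Baseline that ignores observations and acts randomly."""

    def reset(self):
        # Called once at the beginning of every episode.
        pass

    def act(self, observations):
        # `observations` carries the RGB-D frames, the GPS+Compass reading,
        # and the current target from the goal list; a real agent would use
        # them to choose an action.
        return random.choice(ACTIONS)
```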
If you use the multiON framework, please consider citing the following paper:
```
@inproceedings{wani2020multion,
  title = {Multi-ON: Benchmarking Semantic Map Memory using Multi-Object Navigation},
  author = {Saim Wani and Shivansh Patel and Unnat Jain and Angel X. Chang and Manolis Savva},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year = {2020},
}
```
We thank the Habitat team for building the Habitat framework. We also thank the EvalAI team, who helped us host the challenge. This work would not be possible without the Matterport3D dataset.