
Embodied CLIP

Official repository for Simple but Effective: CLIP Embeddings for Embodied AI

[Paper] [Short Video]

We show that frozen visual representations from CLIP achieve competitive performance on navigation-heavy Embodied AI tasks.

This repository includes all code and pretrained models necessary to replicate the experiments in our paper. We include forks of other repositories as branches, which we find a convenient way to centralize our experiments and track changes.

Notice

The EmbodiedCLIP implementations for RoboTHOR and Rearrangement have been upstreamed into allenai/allenact. If you encounter any bugs related to THOR, please open issues there instead.

For general problems with Habitat, please refer to the original Habitat repo; for issues with the EmbodiedCLIP implementation in Habitat, you can reach out to us at allenai/allenact for support.

Experiments

Please see the following links for detailed instructions on how to replicate each experiment:

[Figure: EmbCLIP teaser]

Citation

@inproceedings{khandelwal2022:embodied-clip,
   author    = {Khandelwal, Apoorv and Weihs, Luca and Mottaghi, Roozbeh and Kembhavi, Aniruddha},
   title     = {Simple but Effective: CLIP Embeddings for Embodied AI},
   booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
   month     = {June},
   year      = {2022}
}