This repo contains a curated list of papers using Large Language/Multi-Modal Models for Robotics/RL. Template from awesome-Implicit-NeRF-Robotics.
Please feel free to send me pull requests or email to add papers!
If you find this repository useful, please consider citing and starring this list. Feel free to share it with others!
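New entries follow a one-line pattern so pull requests are easy to merge; the name, venue, and links below are placeholders, not a real paper: `- Name: "Paper Title", Venue, Month Year. [Paper](url) [Code](url) [Website](url)`. A short code sketch of the LLM-as-planner recipe that many of these papers share appears after the paper list.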
- "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis", arXiv, Dec 2023. [Paper] [Paper List] [Website]
- "Language-conditioned Learning for Robotic Manipulation: A Survey", arXiv, Dec 2023, [Paper]
- "Foundation Models in Robotics: Applications, Challenges, and the Future", arXiv, Dec 2023, [Paper] [Paper List]
- "Robot Learning in the Era of Foundation Models: A Survey", arXiv, Nov 2023, [Paper]
- "The Development of LLMs for Embodied Navigation", arXiv, Nov 2023, [Paper]
- AutoRT: "Embodied Foundation Models for Large Scale Orchestration of Robotic Agents", arXiv, Jan 2024. [Paper] [Website]
- LEO: "An Embodied Generalist Agent in 3D World", arXiv, Nov 2023. [Paper] [Code] [Website]
- RoboGen: "A generative and self-guided robotic agent that endlessly proposes and masters new skills", arXiv, Nov 2023. [Paper] [Code] [Website]
- SayPlan: "Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning", CoRL, Nov 2023. [Paper] [Website]
- LLaRP: "Large Language Models as Generalizable Policies for Embodied Tasks", arXiv, Oct 2023. [Paper] [Website]
- RT-X: "Open X-Embodiment: Robotic Learning Datasets and RT-X Models", arXiv, Oct 2023. [Paper] [Website]
- RT-2: "RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control", arXiv, July 2023. [Paper] [Website]
- Instruct2Act: "Mapping Multi-modality Instructions to Robotic Actions with Large Language Model", arXiv, May 2023. [Paper] [Pytorch Code]
- TidyBot: "Personalized Robot Assistance with Large Language Models", arXiv, May 2023. [Paper] [Pytorch Code] [Website]
- PaLM-E: "PaLM-E: An Embodied Multimodal Language Model", arXiv, Mar 2023. [Paper] [Website]
- RT-1: "RT-1: Robotics Transformer for Real-World Control at Scale", arXiv, Dec 2022. [Paper] [GitHub] [Website]
- ProgPrompt: "Generating Situated Robot Task Plans using Large Language Models", arXiv, Sept 2022. [Paper] [Github] [Website]
- Code-As-Policies: "Code as Policies: Language Model Programs for Embodied Control", arXiv, Sept 2022. [Paper] [Colab] [Website]
- SayCan: "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances", arXiv, Apr 2022. [Paper] [Colab] [Website]
- Socratic: "Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language", arXiv, Apr 2022. [Paper] [Pytorch Code] [Website]
- PIGLeT: "PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World", ACL, Jun 2021. [Paper] [Pytorch Code] [Website]
- Matcha: "Chat with the Environment: Interactive Multimodal Perception using Large Language Models", IROS, 2023. [Paper] [Github] [Website]
- Generative Agents: "Generative Agents: Interactive Simulacra of Human Behavior", arXiv, Apr 2023. [Paper] [Code]
- "Large Language Models as Zero-Shot Human Models for Human-Robot Interaction", arXiv, Mar 2023. [Paper]
- "Translating Natural Language to Planning Goals with Large-Language Models", arXiv, Feb 2023. [Paper]
- "PDDL Planning with Pretrained Large Language Models", NeurlPS, 2022. [Paper] [Github]
- CortexBench "Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?" arXiv, Mar 2023. [Paper]
- SayCanPay: "SayCanPay: Heuristic Planning with Large Language Models Using Learnable Domain Knowledge", AAAI, 2024. [Paper] [Code] [Website]
- ViLa: "Look Before You Leap: Unveiling the Power of GPT-4V in Robotic Vision-Language Planning", arXiv, Sep 2023. [Paper] [Website]
- LGMCTS: "LGMCTS: Language-Guided Monte-Carlo Tree Search for Executable Semantic Object Rearrangement", arXiv, Sep 2023. [Paper]
- Prompt2Walk: "Prompt a Robot to Walk with Large Language Models", arXiv, Sep 2023. [Paper] [Website]
- DoReMi: "Grounding Language Model by Detecting and Recovering from Plan-Execution Misalignment", arXiv, July 2023. [Paper] [Website]
- LLM+P: "LLM+P: Empowering Large Language Models with Optimal Planning Proficiency", arXiv, Apr 2023. [Paper] [Code]
- "Foundation Models for Decision Making: Problems, Methods, and Opportunities", arXiv, Mar 2023. [Paper]
- PromptCraft: "ChatGPT for Robotics: Design Principles and Model Abilities", Blog, Feb 2023. [Paper] [Website]
- Text2Motion: "Text2Motion: From Natural Language Instructions to Feasible Plans", arXiv, Mar 2023. [Paper] [Website]
- ChatGPT-Prompts: "ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application", arXiv, Apr 2023. [Paper] [Code/Prompts]
- LM-Nav: "Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action", arXiv, July 2022. [Paper] [Pytorch Code] [Website]
- InnerMonologue: "Inner Monologue: Embodied Reasoning through Planning with Language Models", arXiv, July 2022. [Paper] [Website]
- Housekeep: "Housekeep: Tidying Virtual Households using Commonsense Reasoning", arXiv, May 2022. [Paper] [Pytorch Code] [Website]
- LID: "Pre-Trained Language Models for Interactive Decision-Making", arXiv, Feb 2022. [Paper] [Pytorch Code] [Website]
- ZSP: "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents", ICML, Jan 2022. [Paper] [Pytorch Code] [Website]
- FILM: "FILM: Following Instructions in Language with Modular Methods", ICLR, 2022. [Paper] [Code] [Website]
- Don't Copy the Teacher: "Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue", EMNLP, 2022. [Paper] [Website]
- ReAct: "ReAct: Synergizing Reasoning and Acting in Language Models", ICLR, 2023. [Paper] [Github] [Website]
- LLM-BRAIn: "LLM-BRAIn: AI-driven Fast Generation of Robot Behaviour Tree based on Large Language Model", arXiv, May 2023. [Paper]
- MOO: "Open-World Object Manipulation using Pre-Trained Vision-Language Models", arXiv, Mar 2023. [Paper] [Website]
- CALM: "Keep CALM and Explore: Language Models for Action Generation in Text-based Games", arXiv, Oct 2020. [Paper] [Pytorch Code]
- "Planning with Large Language Models via Corrective Re-prompting", arXiv, Nov 2022. [Paper]
- "Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions", arXiV, Oct 2020, [Paper]
- LLM-planner: "LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- GD: "Grounded Decoding: Guiding Text Generation with Grounded Models for Robot Control", arXiv, Mar 2023. [Paper] [Website]
- COWP: "Robot Task Planning and Situation Handling in Open Worlds", arXiv, Oct 2022. [Paper] [Pytorch Code] [Website]
- GLAM: "Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning", arXiv, May 2023. [Paper] [Pytorch Code]
- "Reward Design with Language Models", ICML, Feb 2023. [Paper] [Pytorch Code]
- LLM-MCTS: "Large Language Models as Commonsense Knowledge for Large-Scale Task Planning", arXiv, May 2023. [Paper]
- "Collaborating with language models for embodied reasoning", NeurIPS, Feb 2022. [Paper]
- LLM-Brain: "LLM as A Robotic Brain: Unifying Egocentric Memory and Control", arXiv, Apr 2023. [Paper]
- Co-LLM-Agents: "Building Cooperative Embodied Agents Modularly with Large Language Models", arXiv, Jul 2023. [Paper] [Code] [Website]
- LLM-Reward: "Language to Rewards for Robotic Skill Synthesis", arXiv, Jun 2023. [Paper] [Website]
- AlphaBlock: "AlphaBlock: Embodied Finetuning for Vision-Language Reasoning in Robot Manipulation", arXiv, May 2023. [Paper]
- CoPAL: "Corrective Planning of Robot Actions with Large Language Models", arXiv, Oct 2023. [Paper] [Website] [Code]
- Beyond Text: "Beyond Text: Improving LLM's Decision Making for Robot Navigation via Vocal Cues", arXiv, Feb 2024. [Paper]
- Octopus: "Octopus: Embodied Vision-Language Programmer from Environmental Feedback", arXiv, Oct 2023. [Paper] [PyTorch Code] [Website]
- Text2Reward: "Text2Reward: Automated Dense Reward Function Generation for Reinforcement Learning", arXiv, Sep 2023. [Paper] [Website]
- VoxPoser: "VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models", arXiv, July 2023. [Paper] [Website]
- ProgramPort: "Programmatically Grounded, Compositionally Generalizable Robotic Manipulation", ICLR, Apr 2023. [Paper] [Website](https://progport.github.io/)
- CoTPC:"Chain-of-Thought Predictive Control", arXiv, Apr 2023, [Paper] [Code]
- DIAL:"Robotic Skill Acquistion via Instruction Augmentation with Vision-Language Models", arXiv, Nov 2022, [Paper] [Website]
- CLIP-Fields:"CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory", arXiv, Oct 2022, [Paper] [PyTorch Code] [Website]
- VIMA:"VIMA: General Robot Manipulation with Multimodal Prompts", arXiv, Oct 2022, [Paper] [Pytorch Code] [Website]
- Perceiver-Actor:"A Multi-Task Transformer for Robotic Manipulation", CoRL, Sep 2022. [Paper] [Pytorch Code] [Website]
- LaTTe: "LaTTe: Language Trajectory TransformEr", arXiv, Aug 2022. [Paper] [TensorFlow Code] [Website]
- Robots Enact Malignant Stereotypes: "Robots Enact Malignant Stereotypes", FAccT, Jun 2022. [Paper] [Pytorch Code] [Website] [Washington Post] [Wired] (code access on request)
- ATLA: "Leveraging Language for Accelerated Learning of Tool Manipulation", CoRL, Jun 2022. [Paper]
- ZeST: "Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?", L4DC, Apr 2022. [Paper]
- LSE-NGU: "Semantic Exploration from Language Abstractions and Pretrained Representations", arXiv, Apr 2022. [Paper]
- Embodied-CLIP: "Simple but Effective: CLIP Embeddings for Embodied AI", CVPR, Nov 2021. [Paper] [Pytorch Code]
- CLIPort: "CLIPort: What and Where Pathways for Robotic Manipulation", CoRL, Sept 2021. [Paper] [Pytorch Code] [Website]
- TIP: "Multimodal Procedural Planning via Dual Text-Image Prompting", arXiv, May 2023. [Paper]
- VLaMP: "Pretrained Language Models as Visual Planners for Human Assistance", arXiv, Apr 2023. [Paper]
- R3M:"R3M: A Universal Visual Representation for Robot Manipulation", arXiv, Nov 2022, [Paper] [Pytorch Code] [Website]
- LIV:"LIV: Language-Image Representations and Rewards for Robotic Control", arXiv, Jun 2023, [Paper] [Pytorch Code] [Website]
- LILAC:"No, to the Right – Online Language Corrections for Robotic Manipulation via Shared Autonomy", arXiv, Jan 2023, [Paper] [Pytorch Code]
- NLMap:"Open-vocabulary Queryable Scene Representations for Real World Planning", arXiv, Sep 2022, [Paper] [Website]
- LLM-GROP:"Task and Motion Planning with Large Language Models for Object Rearrangement", arXiv, May 2023. [Paper] [Website]
- "Towards a Unified Agent with Foundation Models", ICLR, 2023. [Paper]
- ELLM:"Guiding Pretraining in Reinforcement Learning with Large Language Models", arXiv, Feb 2023. [Paper]
- "Language Instructed Reinforcement Learning for Human-AI Coordination", arXiv, Jun 2023. [Paper]
- VoxPoser:"VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models", arXiv, Jul 2023. [Paper] [Website]
- DEPS:"Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents", arXiv, Feb 2023. [Paper] [Pytorch Code]
- Plan4MC:"Plan4MC: Skill Reinforcement Learning and Planning for Open-World Minecraft Tasks", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- VOYAGER:"VOYAGER: An Open-Ended Embodied Agent with Large Language Models", arXiv, May 2023. [Paper] [Pytorch Code] [Website]
- Scalingup: "Scaling Up and Distilling Down: Language-Guided Robot Skill Acquisition", arXiv, July 2023. [Paper] [Code] [Website]
- Gato: "A Generalist Agent", TMLR, Nov 2022. [Paper/PDF] [Website]
- RoboCat: "RoboCat: A self-improving robotic agent", arXiv, Jun 2023. [Paper/PDF] [Website]
- PhysObjects: "Physically Grounded Vision-Language Models for Robotic Manipulation", arXiv, Sept 2023. [Paper]
- MetaMorph: "MetaMorph: Learning Universal Controllers with Transformers", arXiv, Mar 2022. [Paper]
- SPRINT: "SPRINT: Semantic Policy Pre-training via Language Instruction Relabeling", arXiv, June 2023. [Paper] [Website]
- BOSS: "Bootstrap Your Own Skills: Learning to Solve New Tasks with LLM Guidance", CoRL, Nov 2023. [Paper] [Website]
- Grasp Anything: "Pave the Way to Grasp Anything: Transferring Foundation Models for Universal Pick-Place Robots", arXiv, June 2023. [Paper]
- OVSG: "Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs", CoRL, Nov 2023. [Paper] [Code] [Website]
- ADAPT: "ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts", CVPR, May 2022. [Paper]
- "The Unsurprising Effectiveness of Pre-Trained Vision Models for Control", ICML, Mar 2022. [Paper] [Pytorch Code] [Website]
- CoW: "CLIP on Wheels: Zero-Shot Object Navigation as Object Localization and Exploration", arXiv, Mar 2022. [Paper]
- Recurrent VLN-BERT: "A Recurrent Vision-and-Language BERT for Navigation", CVPR, Jun 2021. [Paper] [Pytorch Code]
- VLN-BERT: "Improving Vision-and-Language Navigation with Image-Text Pairs from the Web", ECCV, Apr 2020. [Paper] [Pytorch Code]
- "Interactive Language: Talking to Robots in Real Time", arXiv, Oct 2022. [Paper] [Website]
- VLMaps: "Visual Language Maps for Robot Navigation", arXiv, Mar 2023. [Paper] [Pytorch Code] [Website]
- NLMap:"Open-vocabulary Queryable Scene Representations for Real World Planning", arXiv, Sep 2022, [Paper] [Website]
- OmniGibson: "OmniGibson: a platform for accelerating Embodied AI research built upon NVIDIA's Omniverse engine", CoRL, 2022. [Paper] [Code]
- GENESIS: "A generative world for general-purpose robotics & embodied AI learning", arXiv, Nov 2023. [Code]
- ARNOLD: "ARNOLD: A Benchmark for Language-Grounded Task Learning With Continuous States in Realistic 3D Scenes", ICCV, Apr 2023. [Paper] [Code] [Website]
- MineDojo: "MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge", arXiv, Jun 2022. [Paper] [Code] [Website] [Open Database]
- Habitat 2.0: "Habitat 2.0: Training Home Assistants to Rearrange their Habitat", NeurIPS, Dec 2021. [Paper] [Code] [Website]
- BEHAVIOR: "BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments", CoRL, Nov 2021. [Paper] [Code] [Website]
- iGibson 1.0: "iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes", IROS, Sep 2021. [Paper] [Code] [Website]
- ALFRED: "ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks", CVPR, Jun 2020. [Paper] [Code] [Website]
- BabyAI: "BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning", ICLR, May 2019. [Paper] [Code]
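Many of the planning entries above (e.g., ZSP, SayCan, Code-As-Policies, ProgPrompt) share a zero-shot recipe: prompt an LLM with the robot's skill vocabulary and a task, then keep only the steps the robot can actually execute. Below is a minimal sketch of that pattern, assuming the `openai` Python client; the model name, skill list, and prompt are illustrative and not taken from any specific paper.

```python
# Minimal LLM-as-task-planner sketch (illustrative; not from any one paper).
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical skill vocabulary the robot can actually execute.
SKILLS = ["find(object)", "pick(object)", "place(object, location)", "open(container)", "done()"]

def plan(task: str) -> list[str]:
    """Ask the LLM for a step-by-step plan restricted to known skills."""
    prompt = (
        f"You control a robot with these skills: {', '.join(SKILLS)}.\n"
        f"Task: {task}\n"
        "Reply with one skill call per line, ending with done()."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    text = resp.choices[0].message.content or ""
    steps = [s.strip() for s in text.splitlines() if s.strip()]
    # Keep only lines that start with a known skill name, as a crude grounding
    # filter (real systems replace this with, e.g., affordance scoring).
    names = [s.split("(")[0] for s in SKILLS]
    return [s for s in steps if s.split("(")[0] in names]

if __name__ == "__main__":
    for step in plan("put the apple in the drawer"):
        print(step)
```

In the papers themselves, the crude string filter is replaced by stronger grounding signals, e.g. affordance value functions in SayCan or executable policy code in Code-As-Policies.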
If you find this repository useful, please consider citing this list:
    @misc{kira2022llmroboticspaperslist,
      title = {Awesome-LLM-Robotics},
      author = {Zsolt Kira},
      journal = {GitHub repository},
      url = {https://github.com/GT-RIPL/Awesome-LLM-Robotics},
      year = {2022},
    }