
LEGENT

Open Platform for Embodied Agents


Updates

  • [2024/05] A simple web demo is accessible on HuggingFace Space🤗. Let's dive into the immersive interactive world!

Introduction

In the future, robots will perceive the environment as we do, communicate with us through natural language, and help us with our tasks. LEGENT is dedicated to developing such robots, ones that can chat, see, and act, from virtual worlds to the real world, and we aim to make research in this field accessible to anyone interested. As a pioneering solution that combines large models with embodied agents, LEGENT prioritizes ease of use and scalability. The platform focuses on developing:

  • An easy-to-use environment that simulates a physical world, where an agent can interact with humans through language, perceive its surroundings through egocentric vision, and perform physical actions (see the interaction-loop sketch after this list).

  • Automated generation of training data, including scenes, tasks, and agent trajectories. The platform is tailored to training large multimodal models as embodied models on data generated from simulated worlds at scale. LEGENT serves as the data engine for embodied models in robotics and games, as well as for world models.
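
To make this concrete, here is a minimal sketch of what an interaction loop with such an environment could look like. The names used here (the legent package, the Environment and Action classes, the env_path argument, and the observation fields) are illustrative assumptions for the sketch rather than a confirmed API; please refer to the documentation for the actual interface.

    # Minimal interaction-loop sketch (illustrative API, see the note above).
    from legent import Environment, Action   # assumed package layout

    env = Environment(env_path="auto")        # start or attach to the simulated world
    try:
        obs = env.reset()                     # assumed to hold the egocentric image
                                              # and any chat text from the human
        for _ in range(100):
            # A policy or multimodal model would map obs to the next action here;
            # this sketch simply replies with a fixed utterance at every step.
            action = Action(text="Hello! How can I help?")
            obs = env.step(action)            # apply language and/or physical actions
    finally:
        env.close()                           # shut the environment down cleanly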

Demonstration

Interact with the embodied agent in realistic scenes.

robotics.mp4

Interact with the embodied agent in stylized scenes.

game.mp4

Features

  • Language Interaction. Use natural language as the human-robot interaction interface.

  • Fundamental Physics. The simulation incorporates gravity, friction, and collision dynamics.

  • Diverse Rendering. By adjusting assets and rendering settings, LEGENT can achieve both photorealistic and stylized rendering. Instructions for trying out these scenes can be found here.

    photorealistic.mp4
  • Interactable Objects. Agents and humans can manipulate various 3D objects.

    interactable_objects.mp4
  • Scalable Assets. LEGENT supports importing (1) your own 3D objects, (2) objects from academic datasets, and (3) objects created by generative models. Learn more here.

    assets_generated.mp4
    assets_minecraft.mp4
  • Humanoid Animation. Body movement and nonverbal expression are also important for embodied agents, and LEGENT will continue to improve its support in this area.

  • Scene Generation. LEGENT integrates advanced scene generation algorithms to support scalable training.

    scene.generation.mp4
  • Trajectory Generation. Automatically generates training data for turning multimodal models into language-grounded embodied models. A minimal trajectory example (a loading sketch follows this list):

    (egocentric frames 0000.png–0003.png from this trajectory)
    {
      "id": "20240509-223825-320898",
      "interactions": [
          {
              "from": "human",
              "text": "Where is the orange?"
          },
          {
              "from": "agent",
              "trajectory": [
                  {
                      "image": "20240509-223825-320898/0000.png",
                      "action": "rotate_right(18)"
                  },
                  {
                      "image": "20240509-223825-320898/0001.png",
                      "action": "move_forward(2.0)"
                  },
                  {
                      "image": "20240509-223825-320898/0002.png",
                      "action": "move_forward(1.8), rotate_right(30)"
                  },
                  {
                      "image": "20240509-223825-320898/0003.png",
                      "action": "speak(\"It's on the sofa.\")"
                  }
              ]
          }
      ]
    }
  • User-friendly. LEGENT requires no complex installation and runs cross-platform on both PCs and servers. It is as intuitive as a game while also supporting demanding research needs.
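
To show how a trajectory record like the one above might be consumed, the sketch below reads one such JSON file and walks through the human query and the agent's frame-action steps. It relies only on the structure shown in the example; the file name simply mirrors the record id, and the comma split on the action string is a simplifying assumption for multi-action steps.

    import json
    from pathlib import Path

    # Load one generated trajectory record (file name taken from the example id).
    record = json.loads(Path("20240509-223825-320898.json").read_text())

    for turn in record["interactions"]:
        if turn["from"] == "human":
            print("Human:", turn["text"])
        elif turn["from"] == "agent":
            for step in turn["trajectory"]:
                # Each step pairs an egocentric frame with one or more actions,
                # e.g. "move_forward(1.8), rotate_right(30)".
                frame = step["image"]
                actions = [a.strip() for a in step["action"].split(",")]  # naive split
                print("Frame:", frame, "->", actions)

Frame-action pairs extracted this way can then be assembled into supervised examples for training a multimodal model, as described above.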

Note

We are currently organizing the code and documentation and improving existing features, so LEGENT will become more convenient to use once this process is complete. If you want a more stable version, please stay tuned!