Awesome Multi-Modal Reinforcement Learning

This is a collection of research papers on Multi-Modal Reinforcement Learning (MMRL). The repository will be continuously updated to track the frontier of MMRL. Some papers are not strictly about RL, but we include them because they may still be useful for MMRL research.

You are welcome to follow and star this repository!

Introduction

Multi-Modal RL agents learn from video (images), language (text), or both, much as humans do. We believe it is important for intelligent agents to learn directly from images and text, since such data can be obtained easily from the Internet.

Papers

format (an example entry is shown after this template):
- [title](paper link) [links]
  - authors.
  - key words.
  - experiment environment.
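
For illustration only, a minimal example entry following this template might look like the one below. The paper, link, and authors are real; the key words and environment summary are our own paraphrase of the paper.

- [Learning Transferable Visual Models From Natural Language Supervision (CLIP)](https://arxiv.org/abs/2103.00020) [[code](https://github.com/openai/CLIP)]
  - Alec Radford, Jong Wook Kim, Chris Hallacy, et al.
  - contrastive language-image pre-training, zero-shot transfer.
  - zero-shot image classification benchmarks (e.g., ImageNet).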

Contributing

Our aim is to make this repo even better. If you are interested in contributing, please refer to HERE for contribution instructions.

License

Awesome Multi-Modal Reinforcement Learning is released under the Apache 2.0 license.