CircleCI | License: MIT | Python 3.6+ | Code style: black | Zulip Chat

MTRL

Multi Task RL Algorithms

Contents

  1. Introduction

  2. Setup

  3. Usage

  4. Documentation

  5. Contributing to MTRL

  6. Community

  7. Acknowledgements

Introduction

MTRL is a library of multi-task reinforcement learning algorithms. It has two main components; together, they enable the use of MTRL across different environments and setups.

List of publications & submissions using MTRL (please create a pull request to add the missing entries):

License

MTRL uses the MIT License.

Citing MTRL

If you use MTRL in your research, please use the following BibTeX entry:

@Misc{Sodhani2021MTRL,
  author =       {Shagun Sodhani and Amy Zhang},
  title =        {MTRL - Multi Task RL Algorithms},
  howpublished = {Github},
  year =         {2021},
  url =          {https://github.com/facebookresearch/mtrl}
}

Setup

  • Clone the repository: git clone git@github.com:facebookresearch/mtrl.git.

  • Install dependencies: pip install -r requirements/dev.txt

Usage

  • MTRL supports 8 different multi-task RL algorithms, as described in the documentation.

  • MTRL supports multi-task environments using MTEnv. These environments include MetaWorld and multi-task variants of the DMControl Suite (see the interaction sketch after this list).

  • Refer to the tutorial to get started with MTRL.
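
The snippet below is a minimal sketch of how a multi-task environment obtained through MTEnv is typically consumed; it is illustrative and not part of MTRL itself. The mtenv.make entry point is assumed, and the environment id and the dictionary observation keys ("env_obs", "task_obs") are assumptions for illustration; check the MTEnv documentation for the exact registered ids and observation format.

# Minimal sketch: stepping a multi-task environment through MTEnv.
# NOTE: the environment id and observation keys below are assumptions for
# illustration; see the MTEnv documentation for the registered ids.
from mtenv import make

env = make("MetaWorld-MT10-v0")  # hypothetical id for the MetaWorld MT10 benchmark
obs = env.reset()                # multi-task observation, e.g. {"env_obs": ..., "task_obs": ...}

for _ in range(10):
    action = env.action_space.sample()          # random policy, only to show the loop
    obs, reward, done, info = env.step(action)  # gym-style step
    if done:
        obs = env.reset()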

Documentation

https://mtrl.readthedocs.io

Contributing to MTRL

There are several ways to contribute to MTRL.

  1. Use MTRL in your research.

  2. Contribute a new algorithm. We currently support 8 multi-task RL algorithms and look forward to adding more.

  3. Check out the good-first-issues on GitHub and help fix them.

  4. Check out additional details here.

Community

Ask questions in the Zulip chat or by opening a GitHub issue.

Acknowledgements

  • Our SAC implementation is inspired by Denis Yarats' implementation of SAC.
  • Project files such as the pre-commit config, mypy config, towncrier config, and CircleCI config are based on the corresponding files from Hydra.