
Regym: Deep (Multi-Agent) Reinforcement Learning Framework

Framework to carry out both Single-Agent and Multi-Agent Reinforcement Learning experiments. It has been in continuous development since December 2018 and will continue to evolve with new features and algorithms for many more years!

Features

  • PyTorch implementations of DQN, Double DQN, Double Dueling DQN, A2C, REINFORCE, PPO, and more.
  • Every implementation is compatible with OpenAI Gym and Unity environments (see the interaction sketch after this list).
  • Self-Play training scheme for Multi-Agent environments, as introduced here.
  • Emphasis on cross-compatibility and clear interfaces to add new algorithms. See Adding a new algorithm.
  • Test suite to test and benchmark each algorithm: compatibility with discrete and continuous observation/action spaces, proof of learning, and proof of reproducibility.
  • Distributed training.
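
The framework is built around the standard agent/environment interaction loop of OpenAI Gym. Below is a minimal, self-contained sketch of that loop using the classic Gym API (pre-Gymnasium reset/step signatures); a random action stands in for a trained agent, and none of the names below are part of Regym's actual API.

# Minimal Gym interaction loop; a random policy stands in for a trained agent.
import gym

env = gym.make('CartPole-v1')        # any Gym-compatible environment
observation = env.reset()            # classic Gym API: reset() returns only the observation
done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()                   # placeholder for an agent's policy
    observation, reward, done, info = env.step(action)   # classic 4-tuple step() signature
    episode_return += reward
print('Episode return:', episode_return)
env.close()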

Documentation

All relevant documentation can be found in the docs. Refer to the source code for more specific documentation.

Installation

Using pip

This project has not yet been uploaded to PyPI. This will change soon!

Installing from source

Firstly, clone this repository:

git clone https://github.com/Near32/Regym

Secondly, install it locally in editable mode using the -e flag of the pip install command:

cd Regym/
pip install -e .
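
To check that the editable install worked, you can import the package from Python. The import name regym is an assumption here (the repository name in lowercase); adjust it if the package exposes a different name.

# Quick sanity check of the editable install (import name assumed to be `regym`).
import regym
print(regym.__file__)   # should point back into the cloned Regym/ directory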

Dependencies

Python dependencies are listed in the file setup.py. This package requires Python 3.6 or higher.

If you would like support for Python 2.7 or other Python versions below 3.6, feel free to open an issue.
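
For reference, a minimum Python version is typically enforced through the python_requires argument of setuptools. The snippet below is an illustrative sketch only, not the actual contents of this repository's setup.py; the name and version values are placeholders.

# Illustrative sketch of enforcing a minimum Python version in setup.py.
# NOT the actual setup.py of this repository.
from setuptools import setup, find_packages

setup(
    name='regym',              # assumed distribution name
    version='0.0.1',           # placeholder version
    packages=find_packages(),
    python_requires='>=3.6',   # pip refuses to install on Python < 3.6
)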

License

This project is released under the MIT License; read the License for details.

Papers

List of papers that used this framework.