garage

A toolkit for reproducible reinforcement learning research

garage is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks, plus implementations of widely used reinforcement learning algorithms.

garage is fully compatible with OpenAI Gym. All garage environments implement gym.Env, and garage components are written against the gym.Env interface, so they can also be used with any environment that implements gym.Env.
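To make the compatibility contract concrete, here is a minimal sketch of a custom environment written against the classic Gym API (reset() returning an observation, step() returning a 4-tuple). PointEnv is a hypothetical example, not part of garage; anything implementing gym.Env in this way can be passed to components that accept Gym environments.

```python
import gym
import numpy as np
from gym import spaces


class PointEnv(gym.Env):
    """Hypothetical 2D point-mass environment implementing gym.Env."""

    def __init__(self):
        # Observations are (x, y) positions; actions are small (dx, dy) steps.
        self.observation_space = spaces.Box(
            low=-10.0, high=10.0, shape=(2,), dtype=np.float32)
        self.action_space = spaces.Box(
            low=-0.1, high=0.1, shape=(2,), dtype=np.float32)
        self._state = np.zeros(2, dtype=np.float32)

    def reset(self):
        # Start each episode from a random position near the origin.
        self._state = np.random.uniform(-1.0, 1.0, size=(2,)).astype(np.float32)
        return self._state.copy()

    def step(self, action):
        # Move the point and reward progress toward the origin.
        self._state = np.clip(self._state + action, -10.0, 10.0)
        distance = float(np.linalg.norm(self._state))
        reward = -distance
        done = bool(distance < 0.01)
        return self._state.copy(), reward, done, {}

    def render(self, mode='human'):
        print('position:', self._state)
```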

garage officially supports only Python 3.5+.

garage comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.

garage supports TensorFlow as its neural network framework. TensorFlow modules can be found under garage/tf.
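As a rough illustration of that layout, the sketch below shows how framework-specific imports might look. The exact module and class names are assumptions and vary between garage releases, so consult the documentation for the current API.

```python
# Hypothetical sketch of the namespace layout described above; the exact
# module and class names are assumptions and differ between garage releases.

# TensorFlow-specific building blocks (algorithms, neural-network policies,
# and so on) live under the garage.tf namespace:
from garage.tf.algos import TRPO                   # assumed module path
from garage.tf.policies import GaussianMLPPolicy   # assumed module path

# Framework-agnostic pieces (environments, experiment tooling, etc.)
# live directly under the top-level garage package.
```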

Documentation

Documentation is available online at https://garage.readthedocs.org/en/latest/.

Citing garage

If you use garage for academic research, you are highly encouraged to cite the following paper on the original rllab implementation:

@inproceedings{duan2016benchmarking,
  title={Benchmarking Deep Reinforcement Learning for Continuous Control},
  author={Duan, Yan and Chen, Xi and Houthooft, Rein and Schulman, John and Abbeel, Pieter},
  booktitle={Proceedings of the 33rd International Conference on Machine Learning (ICML)},
  year={2016}
}

Credits

garage is based on a predecessor project called rllab. The garage project is grateful for the contributions of the original rllab authors, and hopes to continue advancing the state of reproducibility in RL research in the same spirit.

rllab was originally developed by Rocky Duan (UC Berkeley/OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley/OpenAI), John Schulman (UC Berkeley/OpenAI), and Pieter Abbeel (UC Berkeley/OpenAI).