
Event-Driven rllab

event-driven rllab is a framework that builds on rllab. The central use case for this framework is multi-agent settings in which agents' actions are temporally extended and asynchronous. This package is currently under development, and a tutorial on how to use it will be uploaded soon. If you would like to use it in the meantime, please contact kmenda (at) stanford (dot) edu
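
Since the package is pre-release, the sketch below is purely illustrative: every name in it (EventDrivenEnv, act, observe, the (action, duration) convention) is a hypothetical stand-in, not this package's actual API. It is meant only to convey the core idea: instead of all agents acting in lockstep, the simulation advances to the next event, and only the agent whose action just completed chooses a new, temporally extended action.

```python
# Hypothetical sketch only -- all names here are illustrative, not this package's API.
import heapq


class EventDrivenEnv:
    """Toy event-driven multi-agent loop: agents act asynchronously, and the
    simulation jumps to whichever agent's extended action completes next."""

    def __init__(self, agents):
        self.agents = agents
        self.clock = 0.0
        # Event queue of (completion_time, agent_id), ordered by time.
        self.events = [(0.0, i) for i in range(len(agents))]
        heapq.heapify(self.events)

    def run(self, horizon):
        while self.events and self.clock < horizon:
            # Advance the clock to the next event; only this agent decides now.
            self.clock, agent_id = heapq.heappop(self.events)
            obs = self.observe(agent_id)
            # The agent picks a temporally extended action with some duration.
            action, duration = self.agents[agent_id].act(obs)
            # (Apply the action's effect on the world state here.)
            heapq.heappush(self.events, (self.clock + duration, agent_id))

    def observe(self, agent_id):
        # Placeholder observation; a real environment would return agent state.
        return self.clock
```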

rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of the following algorithms:

- REINFORCE
- Truncated Natural Policy Gradient (TNPG)
- Reward-Weighted Regression (RWR)
- Relative Entropy Policy Search (REPS)
- Trust Region Policy Optimization (TRPO)
- Cross Entropy Method (CEM)
- Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
- Deep Deterministic Policy Gradient (DDPG)
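
As a concrete starting point, the snippet below follows the standard TRPO example shipped with rllab (examples/trpo_cartpole.py in the upstream repository); exact module paths and hyperparameters may differ slightly between versions.

```python
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalize import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

# Normalize the environment so observations and actions are rescaled.
env = normalize(CartpoleEnv())

# A Gaussian MLP policy with two hidden layers of 32 units each.
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # samples collected per iteration
    max_path_length=100,  # cap on episode length
    n_itr=40,             # number of training iterations
    discount=0.99,
    step_size=0.01,       # KL constraint for the trust region
)
algo.train()
```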

rllab is fully compatible with OpenAI Gym. See the documentation for instructions and examples.
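
For instance, a Gym environment can be wrapped with rllab's GymEnv so that any rllab algorithm can train on it; the environment ID below is only an example, and available IDs depend on the installed Gym version.

```python
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalize import normalize

# Wrap a Gym environment so it exposes rllab's environment interface.
env = normalize(GymEnv("Pendulum-v0"))
```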

rllab only officially supports Python 3.5+. For an older snapshot of rllab sitting on Python 2, please use the py2 branch.

rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.
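
As a rough sketch of the workflow (assuming the cloud setup described in the documentation has been completed), experiments are launched through run_experiment_lite, with the mode argument switching between local and EC2 execution:

```python
from rllab.misc.instrument import run_experiment_lite


def run_task(*_):
    # Build the env, policy, baseline, and algorithm here, then call algo.train().
    pass


run_experiment_lite(
    run_task,
    mode="ec2",            # "local" runs the experiment on this machine instead
    n_parallel=4,          # worker processes used for sampling
    snapshot_mode="last",  # keep only the final policy snapshot
    seed=1,
)
```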

The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf.
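
The TensorFlow port largely mirrors the Theano API under the sandbox namespace. The imports below illustrate the parallel structure; the paths reflect the upstream sandbox layout and may shift between versions.

```python
# Theano-backed implementation (main modules):
from rllab.algos.trpo import TRPO

# TensorFlow-backed counterparts under sandbox/rocky/tf:
from sandbox.rocky.tf.algos.trpo import TRPO as TfTRPO
from sandbox.rocky.tf.envs.base import TfEnv
from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy
```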

Documentation

Documentation is available online: https://rllab.readthedocs.org/en/latest/.

Citing rllab

If you use rllab for academic research, you are highly encouraged to cite the following paper:

- Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

Credits

rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.

Slides

Slides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016_benchmarking_slides.pdf?dl=0