RLTF is a research framework that provides high-quality implementations of common reinforcement learning algorithms. It also enables fast prototyping and benchmarking of new methods.
Status: This work is under active development (breaking changes might occur).
Coming additions:
- Official release for DQN-IDS and C51-IDS
- MPI support for policy gradients
- Dueling DQN
- Prioritized Experience Replay
- n-step returns
- Rainbow
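Of the planned additions, n-step returns are simple to illustrate in isolation. A minimal, framework-agnostic sketch (not rltf's actual API) of the n-step target G_t = r_t + γ r_{t+1} + … + γ^{n-1} r_{t+n-1} + γ^n V(s_{t+n}):

```python
def n_step_return(rewards, bootstrap_value, gamma, n):
    """Compute the n-step return
        G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k}  +  gamma^n * V(s_{t+n}),
    where `rewards[k]` is r_{t+k} and `bootstrap_value` estimates V(s_{t+n}).
    """
    assert len(rewards) >= n
    # Discounted bootstrap from the value estimate n steps ahead
    g = gamma ** n * bootstrap_value
    # Discounted sum of the n intermediate rewards
    for k in range(n):
        g += gamma ** k * rewards[k]
    return g
```

With n = 1 this reduces to the familiar one-step TD target r_t + γ V(s_{t+1}); larger n propagates rewards faster at the cost of higher variance.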
Implemented models achieve results comparable to those reported in the corresponding papers. With minor exceptions, all implementations are intended to be equivalent to the ones described in the original papers.
Implementations known to misbehave:
- QR-DQN (in progress)
The goal of this framework is to provide stable implementations of standard RL algorithms and simultaneously enable fast prototyping of new methods. Some important features include:
- Exact reimplementation of, and performance competitive with, the original papers
- Unified and reusable modules
- Clear hierarchical structure and easy-to-navigate code
- Efficient GPU utilization and fast training
- Detailed logging of hyperparameters, training and evaluation scores, git diffs, and TensorBoard visualizations
- Episode video recordings with plots of network outputs
- Compatible with OpenAI gym, MuJoCo, PyBullet and Roboschool
- Restoring the training process from where it stopped, retraining on a new task, fine-tuning
Requirements:
- Python >= 3.5
- Tensorflow >= 1.6.0
- OpenAI gym >= 0.9.6
- opencv-python (either pip package or OpenCV library with python bindings)
- matplotlib (with TkAgg backend)
- pybullet (optional)
- roboschool (optional)
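The pip-installable dependencies above can typically be set up as follows (package names assumed from the list; this is environment setup, not rltf's own install script):

```shell
# Install the required Python dependencies listed above
pip install "tensorflow>=1.6.0" "gym>=0.9.6" opencv-python matplotlib

# Optional: PyBullet environments (Roboschool has its own install procedure)
pip install pybullet
```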
Installation:
git clone https://github.com/nikonikolov/rltf.git
A pip package is coming soon.
For brief documentation see docs/.
If you use this repository for your research, please cite:
@misc{rltf,
  author       = {Nikolay Nikolov},
  title        = {RLTF: Reinforcement Learning in TensorFlow},
  year         = {2018},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/nikonikolov/rltf}},
}