TF-Agents makes designing, implementing, and testing new RL algorithms easier by providing well-tested modular components that can be modified and extended. It enables fast code iteration, with good test integration and benchmarking.
To get started, we recommend checking out one of our Colab tutorials. If you need an intro to RL (or a quick recap), start here. Otherwise, check out our DQN tutorial to get an agent up and running in the Cartpole environment.
NOTE: Current TF-Agents pre-release is under active development and interfaces may change at any time. Feel free to provide feedback and comments.
- Agents
- Tutorials
- Multi-Armed Bandits
- Examples
- Installation
- Contributing
- Releases
- Principles
- Citation
- Disclaimer
In TF-Agents, the core elements of RL algorithms are implemented as Agents. An agent encompasses two main responsibilities: defining a Policy to interact with the Environment, and learning/training that Policy from collected experience.
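For illustration, the snippet below is a minimal sketch (not the library's canonical example) of wiring an Agent, Policy, and Environment together for DQN on CartPole. It assumes a recent TF-Agents release with the `suite_gym`, `q_network`, and `dqn_agent` modules; exact constructor arguments and the optimizer choice may differ between versions.

```python
import tensorflow as tf

from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.networks import q_network

# Load the CartPole environment and wrap it so it can be driven from TensorFlow.
env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

# The Q-network maps observations to one Q-value per action.
q_net = q_network.QNetwork(env.observation_spec(), env.action_spec())

# The agent ties together the Policy used to act and the logic used to train it.
agent = dqn_agent.DqnAgent(
    env.time_step_spec(),
    env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))
agent.initialize()

# agent.policy is the greedy Policy for evaluation/deployment,
# agent.collect_policy gathers exploration experience, and
# agent.train(experience) updates the Q-network from collected trajectories.
```

The DQN tutorial walks through the full training loop, including replay buffers and drivers for collecting experience.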
Currently the following algorithms are available under TF-Agents:
- DQN: Human level control through deep reinforcement learning Mnih et al., 2015
- DDQN: Deep Reinforcement Learning with Double Q-learning Hasselt et al., 2015
- DDPG: Continuous control with deep reinforcement learning Lillicrap et al., 2015
- TD3: Addressing Function Approximation Error in Actor-Critic Methods Fujimoto et al., 2018
- REINFORCE: Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning Williams, 1992
- PPO: Proximal Policy Optimization Algorithms Schulman et al., 2017
- SAC: Soft Actor Critic Haarnoja et al., 2018
See `docs/tutorials/` for tutorials on the major components provided.
The TF-Agents library also contains a Multi-Armed Bandits suite with a few environments and agents. RL agents can also be used on Bandit environments. For a tutorial, see `tf_agents/bandits/colabs/bandits_tutorial.ipynb`.
For examples ready to run, see `tf_agents/bandits/agents/examples/`.
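As a rough sketch of how the Bandits pieces fit together (the tutorial above is the authoritative reference), the snippet below builds a small stationary stochastic bandit environment and a LinUCB agent. The module paths and constructor arguments are assumptions based on recent releases and may differ in yours.

```python
import numpy as np

from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.environments import tf_py_environment

batch_size = 2

def context_sampling_fn():
  # Sample a batch of 2-dimensional contexts uniformly at random.
  return np.random.uniform(-1.0, 1.0, [batch_size, 2]).astype(np.float32)

def make_reward_fn(theta):
  # Deterministic linear reward for one arm, given a single context vector.
  def _reward(context):
    return np.float32(np.dot(context, theta))
  return _reward

# A stationary stochastic bandit environment with three arms.
env = tf_py_environment.TFPyEnvironment(
    sspe.StationaryStochasticPyEnvironment(
        context_sampling_fn,
        [make_reward_fn(np.array([1.0, 0.0])),
         make_reward_fn(np.array([0.0, 1.0])),
         make_reward_fn(np.array([0.5, 0.5]))],
        batch_size=batch_size))

# LinUCB fits a linear reward model per arm and picks arms optimistically.
agent = lin_ucb_agent.LinearUCBAgent(
    time_step_spec=env.time_step_spec(),
    action_spec=env.action_spec())
```

Training then follows the same pattern as any other agent: the collect policy picks arms, the observed rewards are packed into trajectories, and `agent.train` consumes them.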
End-to-end examples that train agents can be found under each agent's directory.
TF-Agents publishes nightly and stable builds. For a list of releases, read the Releases section. The commands below cover installing TF-Agents stable and nightly from pypi.org as well as from a GitHub clone.
Run the commands below to install the most recent stable release (0.3.0), which was tested with TensorFlow 1.15.0 and 2.0.0, and with Python 2 and 3.
pip install --user tf-agents
pip install --user tensorflow==2.0.0
# Or, for TensorFlow 1.x
pip install --user tensorflow==1.15.0
# To get the matching examples and colabs
git clone https://github.com/tensorflow/agents.git
cd agents
git checkout v0.3.0
Note: TF-Agents 0.3.0 is not compatible with TensorFlow 2.1.0 unless the nightly release of TensorFlow Probability is installed: `pip install tfp-nightly`.
Nightly builds include newer features, but may be less stable than the versioned releases. The nightly build is pushed as `tf-agents-nightly`. We suggest installing nightly versions of TensorFlow (`tf-nightly`) and TensorFlow Probability (`tfp-nightly`), as those are the versions TF-Agents nightly is tested against. As of 17-JAN-2020, nightly releases are only compatible with Python 3.
To install the nightly build version, run the following:
# Installing with the `--upgrade` flag ensures you'll get the latest version.
pip install --user --upgrade tf-agents-nightly # depends on tf-nightly
# `--force-reinstall` helps guarantee the right version.
pip install --user --force-reinstall tf-nightly
pip install --user --force-reinstall tfp-nightly
After cloning the repository, the dependencies can be installed by running `pip install -e .[tests]`. TensorFlow needs to be installed independently: `pip install --user tf-nightly`.
We're eager to collaborate with you! See CONTRIBUTING.md for a guide on how to contribute. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
TF-Agents publishes both stable and nightly releases. The nightly releases are often fine but can have issues due to upstream libraries being in flux. The table below lists the stable releases of TF-Agents, to help users who may be locked into a specific version of TensorFlow or other supporting libraries. The TensorFlow version listed for each release is the version it was tested with; other versions might work but were not tested. Nightly releases are only compatible with Python 3. 0.3.0 was the last release compatible with Python 2.
| Release | Branch / Tag | TensorFlow Version |
| --- | --- | --- |
| Nightly | master | tf-nightly |
| 0.3.0 | v0.3.0 | 1.15.0 and 2.0.0 |
Examples of installing the most recent stable release, the nightly build, and a specific version of TF-Agents:
# Stable
pip install tf-agents
# Nightly
pip install tf-agents-nightly
# Specific version
pip install tf-agents==0.3.0
This project adheres to Google's AI principles. By participating in, using, or contributing to this project, you are expected to adhere to these principles.
If you use this code, please cite it as:
@misc{TFAgents,
title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
author = "{Sergio Guadarrama, Anoop Korattikara, Oscar Ramirez,
Pablo Castro, Ethan Holly, Sam Fishman, Ke Wang, Ekaterina Gonina, Neal Wu,
Efi Kokiopoulou, Luciano Sbaiz, Jamie Smith, Gábor Bartók, Jesse Berent,
Chris Harris, Vincent Vanhoucke, Eugene Brevdo}",
howpublished = {\url{https://github.com/tensorflow/agents}},
url = "https://github.com/tensorflow/agents",
year = 2018,
note = "[Online; accessed 25-June-2019]"
}
This is not an official Google product.