TensorFlow 2 Implementation of Multi-Agent Reinforcement Learning Approaches

This repository contains modular TensorFlow 2 implementations of multi-agent versions of the RL methods DDPG (MADDPG), TD3 (MATD3), SAC (MASAC), and D4PG (MAD4PG). It also implements prioritized experience replay.

In our experiments we found MATD3 to work best, and we did not find a benefit from using Soft Actor-Critic or the distributional D4PG. However, these methods may be beneficial in more complex environments; our evaluation here focused on the multiagent-particle-envs by OpenAI.

Code Structure

We provide the code for the agents in tf2marl/agents and a complete training loop, with logging powered by sacred, in train.py.
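For orientation, the layout implied by this README (only the paths mentioned here are shown; the repository contains more):

```
tf2marl/
    agents/        # MADDPG, MATD3, MASAC and MAD4PG implementations
    multiagent/    # particle environments (see Acknowledgement)
train.py           # training loop with sacred logging
```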

We denote lists of variables with one entry per agent by the suffix _n, e.g. state_n is a list of n state batches, one for each agent.
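For illustration, a minimal sketch of this convention (the shapes and names below are hypothetical, not taken from the repository):

```python
import numpy as np

num_agents = 3
batch_size = 32
obs_dim = 10  # hypothetical per-agent observation size

# state_n: one batch of states per agent, as described above
state_n = [np.zeros((batch_size, obs_dim), dtype=np.float32)
           for _ in range(num_agents)]

# each agent then acts on its own entry, e.g.:
# action_n = [agent.act(state) for agent, state in zip(agents, state_n)]
```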

Usage

Use Python >= 3.6 and install the requirements with

pip install -r requirements.txt

Start an experiment with

python train.py

As we use sacred for configuration management and logging, the configuration can be updated through its CLI, e.g.

python train.py with scenario_name='simple_spread' num_units=128 num_episodes=10000
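Under the hood, sacred binds these with key=value updates to variables declared in a config function. A minimal, self-contained sketch of the mechanism (the defaults shown are placeholders, not the values used in train.py):

```python
from sacred import Experiment

ex = Experiment('tf2marl')  # experiment name is illustrative

@ex.config
def config():
    # any of these can be overridden via `python train.py with key=value`
    scenario_name = 'simple_spread'
    num_units = 64
    num_episodes = 60000

@ex.automain
def main(scenario_name, num_units, num_episodes):
    print(scenario_name, num_units, num_episodes)
```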

and experiments are automatically logged to results/sacred/, or optionally also to a MongoDB. To browse this database we recommend Omniboard.
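In sacred, logging targets are attached as observers on the experiment object. A sketch of the two targets mentioned above (the MongoDB connection details are placeholders):

```python
from sacred import Experiment
from sacred.observers import FileStorageObserver, MongoObserver

ex = Experiment('tf2marl')  # illustrative, see the sketch above

# file-based logging to results/sacred/
ex.observers.append(FileStorageObserver('results/sacred'))

# optional MongoDB logging, which Omniboard can then visualize
ex.observers.append(MongoObserver(url='localhost:27017', db_name='tf2marl'))
```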

Acknowledgement

The environments in /tf2marl/multiagent are taken from multiagent-particle-envs by OpenAI, with the exception of inversion.py and maximizeA2.py, which I added for debugging purposes.

The implementation of the segment tree used for prioritized replay is based on stable-baselines.
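For readers unfamiliar with that data structure: a sum tree stores priorities in its leaves so that sampling an index in proportion to its priority takes O(log n). A simplified sketch of the idea, not the repository's or stable-baselines' actual code:

```python
import numpy as np

class SumTree:
    """Minimal sum tree for proportional prioritized sampling (sketch)."""

    def __init__(self, capacity):
        # capacity must be a power of two; leaves live at [capacity, 2*capacity)
        self.capacity = capacity
        self.tree = np.zeros(2 * capacity)

    def update(self, idx, priority):
        pos = idx + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:  # propagate the new sum up to the root at tree[1]
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def sample(self, value):
        """Return the leaf index whose prefix-sum interval contains value."""
        pos = 1
        while pos < self.capacity:  # descend from the root to a leaf
            if value <= self.tree[2 * pos]:
                pos = 2 * pos            # go left
            else:
                value -= self.tree[2 * pos]
                pos = 2 * pos + 1        # go right
        return pos - self.capacity

# usage: after setting priorities with update(), draw a transition index with
# tree = SumTree(8); tree.update(3, 2.0)
# idx = tree.sample(np.random.uniform(0, tree.tree[1]))
```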