This is a PyTorch implementation of several state-of-the-art multi-agent reinforcement learning (MARL) algorithms, including QMIX, VDN, COMA, QTRAN (both QTRAN-base and QTRAN-alt), CommNet, DyMA-CL, and G2ANet. In addition, because CommNet and G2ANet require an external training algorithm, you can combine them with COMA; we also provide Central-V and REINFORCE for training them. We trained these algorithms on SMAC, the decentralised micromanagement scenario of StarCraft II.
- QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
- Value-Decomposition Networks For Cooperative Multi-Agent Learning
- Counterfactual Multi-Agent Policy Gradients
- QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning
- Learning Multiagent Communication with Backpropagation
- From Few to More: Large-scale Dynamic Multiagent Curriculum Learning
- Multi-Agent Game Abstraction via Graph Attention Neural Network
- Add CUDA option
- DyMA-CL
- G2ANet
- Other SOTA MARL algorithms
$ python main.py --map=3m --alg=qmix
Directly run main.py, and the algorithm will start training on map 3m. Note that CommNet and G2ANet require an external training algorithm, so the algorithm name takes the form reinforce+commnet or central_v+g2anet. All the algorithms we provide are listed in ./common/arguments.py.
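For example, assuming the same command-line interface as above, training CommNet with REINFORCE on the same map would be launched like this:

$ python main.py --map=3m --alg=reinforce+commnet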
Running DyMA-CL is independent of the other algorithms because it requires different environment settings; you should open it as a new project. For more details, please read the DyMA-CL documentation.
We train each algorithm independently 8 times and report the mean of the 8 runs. To make the curves smoother, we also average every five consecutive points along the horizontal axis. In each independent run, we train for 5000 epochs and evaluate every 5 epochs. Furthermore, as shown in Figure 2, we compare what we consider the best result among the 8 independent runs. All of the results are saved in ./result.
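Below is a minimal sketch of this kind of post-processing, assuming the per-run evaluation win rates are stored as NumPy arrays; the function and variable names are illustrative and not taken from this repository's code.

```python
import numpy as np

def smooth_curve(win_rates: np.ndarray, window: int = 5) -> np.ndarray:
    """Average over independent runs, then average every `window` points."""
    mean_curve = win_rates.mean(axis=0)              # mean over the 8 independent runs
    n = len(mean_curve) // window * window           # drop the incomplete tail, if any
    return mean_curve[:n].reshape(-1, window).mean(axis=1)  # mean of every 5 points

# Example: 8 runs, each evaluated 1000 times (5000 epochs, evaluated every 5 epochs)
rates = np.random.rand(8, 1000)                      # placeholder data, not real results
print(smooth_curve(rates).shape)                     # -> (200,)
```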