StarCraft

This is a PyTorch implementation of several state-of-the-art multi-agent reinforcement learning algorithms: QMIX, VDN, COMA, QTRAN (both QTRAN-base and QTRAN-alt), CommNet, DyMA-CL, and G2ANet. Because CommNet and G2ANet need an external training algorithm, you can combine them with COMA; we also provide Central-V and REINFORCE to train them. We trained these algorithms on SMAC, the decentralised micromanagement scenario of StarCraft II.
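As a rough illustration of the value-decomposition idea behind VDN and QMIX (a minimal PyTorch sketch, not the code in this repository): VDN sums the per-agent Q-values into a joint Q-value, while QMIX mixes them with non-negative, state-conditioned weights so that the joint Q-value is monotonic in each agent's Q-value.

import torch
import torch.nn as nn

class VDNMixer(nn.Module):
    # VDN: the joint Q-value is the sum of the per-agent Q-values.
    def forward(self, agent_qs):                 # agent_qs: (batch, n_agents)
        return agent_qs.sum(dim=1, keepdim=True)

class QMixer(nn.Module):
    # QMIX: mix per-agent Q-values with weights produced by hypernetworks
    # conditioned on the global state; abs() keeps the weights non-negative,
    # which makes the joint Q-value monotonic in each agent's Q-value.
    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(nn.Linear(state_dim, embed_dim),
                                      nn.ReLU(),
                                      nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):          # (batch, n_agents), (batch, state_dim)
        b = agent_qs.size(0)
        w1 = torch.abs(self.hyper_w1(state)).view(b, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(b, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs.unsqueeze(1), w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(b, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(b, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(b, 1)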

Corresponding Papers

  • QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
  • Value-Decomposition Networks For Cooperative Multi-Agent Learning
  • Counterfactual Multi-Agent Policy Gradients
  • QTRAN: Learning to Factorize with Transformation for Cooperative Multi-Agent Reinforcement Learning
  • Learning Multiagent Communication with Backpropagation
  • From Few to More: Large-scale Dynamic Multiagent Curriculum Learning
  • Multi-Agent Game Abstraction via Graph Attention Neural Network

Requirements

  • PyTorch
  • SMAC (the StarCraft Multi-Agent Challenge environment)

Acknowledgement

TODO List

  • Add CUDA option
  • DyMA-CL
  • G2ANet
  • Other SOTA MARL algorithms

Quick Start

$ python main.py --map=3m --alg=qmix

Run main.py directly, and the algorithm will start training on map 3m. Note that CommNet and G2ANet need an external training algorithm, so the algorithm name takes a form like reinforce+commnet or central_v+g2anet; all the algorithms we provide are listed in ./common/arguments.py.
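For example, to train G2ANet with Central-V on the same map:

$ python main.py --map=3m --alg=central_v+g2anet

To illustrate the naming convention, a composite name can be split on '+' into the training algorithm and the communication network (a hypothetical sketch; parse_alg is not a function in this repository):

def parse_alg(alg_name):
    # Split a composite name such as 'reinforce+commnet' into
    # (training algorithm, communication network); plain names
    # such as 'qmix' have no communication component.
    parts = alg_name.split('+')
    if len(parts) == 2:
        return parts[0], parts[1]
    return alg_name, None

assert parse_alg('central_v+g2anet') == ('central_v', 'g2anet')
assert parse_alg('qmix') == ('qmix', None)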

Running DyMA-CL is independent from the others because it requires different environment settings, so you should open it as a new project. For more details, please read the DyMA-CL documentation.

Result

We independently train these algorithms 8 times and report the mean of the 8 runs. To make the curves smoother, we also average every five consecutive points along the horizontal axis. In each independent run, we train each algorithm for 5000 epochs and evaluate it every 5 epochs. Furthermore, as shown in figure 2, we compare what we judge to be the best result among the 8 independent runs. All of the results are saved in ./result.
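For reference, the averaging and smoothing described above can be reproduced as follows (a minimal NumPy sketch under the setup stated above; the file name win_rates.npy is hypothetical):

import numpy as np

# Hypothetical array of shape (8, n_evals): one row of evaluation
# win rates per independent run.
win_rates = np.load('./result/win_rates.npy')

mean_curve = win_rates.mean(axis=0)        # average across the 8 runs

# Smooth by averaging every 5 consecutive points.
n = len(mean_curve) // 5 * 5
smoothed = mean_curve[:n].reshape(-1, 5).mean(axis=1)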

1. Mean Win Rate of 8 Independent Runs on 3m

2. Best Result in 8 Independent Runs on 3m