This repo implements MARL algorithms for networked system control, where each agent's observability and communication are limited to its neighborhood. For fair comparison, all algorithms are applied to A2C agents and fall into two groups: IA2C contains non-communicative policies that use neighborhood information only, whereas MA2C contains communicative policies with certain communication protocols.
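As a minimal illustration of neighborhood-limited observability (a sketch, not the repo's actual API; `local_observation` and the line-graph example are hypothetical):

```python
import numpy as np

def local_observation(states, adj, agent_i):
    """Return agent_i's own state followed by its neighbors' states.

    Each agent observes only its own state plus the states of its
    immediate neighbors, as given by the adjacency matrix `adj`.
    """
    neighbors = np.nonzero(adj[agent_i])[0]
    return np.concatenate([states[agent_i]] + [states[j] for j in neighbors])

# 3 agents on a line graph: 0 - 1 - 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
states = np.array([[0.1], [0.2], [0.3]])
# Agent 1 sees itself and both neighbors; agents 0 and 2 see only one neighbor each.
obs_1 = local_observation(states, adj, 1)
```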
Available IA2C algorithms:
- PolicyInferring: Lowe, Ryan, et al. "Multi-agent actor-critic for mixed cooperative-competitive environments." Advances in Neural Information Processing Systems, 2017.
- FingerPrint: Foerster, Jakob, et al. "Stabilising experience replay for deep multi-agent reinforcement learning." arXiv preprint arXiv:1702.08887, 2017.
- ConsensusUpdate: Zhang, Kaiqing, et al. "Fully decentralized multi-agent reinforcement learning with networked agents." arXiv preprint arXiv:1802.08757, 2018.
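Of the IA2C variants, ConsensusUpdate is the most structurally distinctive: after each local gradient update, every agent mixes its critic parameters with those of its neighbors. A minimal numpy sketch of that mixing step (variable names and the uniform row-stochastic weighting are illustrative assumptions):

```python
import numpy as np

def consensus_step(params, adj):
    """One consensus (parameter-averaging) step over the network.

    params: (n_agents, n_params) array of per-agent parameter vectors.
    adj:    adjacency matrix with self-loops (agent i averages over its
            closed neighborhood).
    """
    weights = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic mixing matrix
    return weights @ params

# Line graph 0 - 1 - 2, with self-loops on the diagonal.
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=float)
params = np.array([[0.0], [3.0], [6.0]])
mixed = consensus_step(params, adj)  # each agent moves toward its neighborhood mean
```

Repeating this step drives all agents toward a common parameter vector while each agent only ever communicates with its neighbors.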
Available MA2C algorithms:
- DIAL: Foerster, Jakob, et al. "Learning to communicate with deep multi-agent reinforcement learning." Advances in Neural Information Processing Systems, 2016.
- CommNet: Sukhbaatar, Sainbayar, et al. "Learning multiagent communication with backpropagation." Advances in Neural Information Processing Systems, 2016.
- NeurComm: Chu, Tianshu, et al. "Multi-agent reinforcement learning for networked system control." International Conference on Learning Representations, 2020.
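To make the "communicative policy" idea concrete, here is a sketch of one CommNet communication step (Sukhbaatar et al., 2016): each agent's next hidden state combines its own hidden state with the mean of the other agents' hidden states. The weights here are random placeholders; in the actual method, `H` and `C` are learned by backpropagation.

```python
import numpy as np

def commnet_step(h, H, C):
    """One CommNet layer: mix each agent's hidden state with the others' mean.

    h: (n_agents, dim) hidden states; H, C: (dim, dim) weight matrices.
    """
    n = h.shape[0]
    # Communication vector for agent i: mean of h_j over all j != i.
    c = (h.sum(axis=0, keepdims=True) - h) / (n - 1)
    return np.tanh(h @ H + c @ C)

rng = np.random.default_rng(0)
n_agents, dim = 3, 4
h = rng.standard_normal((n_agents, dim))
H = rng.standard_normal((dim, dim))  # placeholder for learned self-weights
C = rng.standard_normal((dim, dim))  # placeholder for learned communication weights
h_next = commnet_step(h, H, C)
```

CommNet broadcasts to all agents; the networked setting studied in this repo restricts such communication to each agent's neighborhood.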
Available NMARL scenarios:
- ATSC Grid: Adaptive traffic signal control in a synthetic traffic grid.
- ATSC Monaco: Adaptive traffic signal control in a real-world traffic network from Monaco city.
- CACC Catch-up: Cooperative adaptive cruise control for catching up with the leading vehicle.
- CACC Slow-down: Cooperative adaptive cruise control for following the leading vehicle to slow down.
Requirements:
- Python3
- Tensorflow
- SUMO