This is the code implementing SchedNet, the algorithm presented in the ICLR 2019 paper "Learning to Schedule Communication in Multi-agent Reinforcement Learning".
In multi-agent reinforcement learning (MARL), well-coordinated actions among the agents are crucial to achieving the target goal. One way to strengthen coordination is to enable multiple agents to communicate with each other in a distributed manner and behave as a group. In this paper, we study a practical scenario in which (i) the communication bandwidth is limited and (ii) the agents share the communication medium, so that only a restricted number of agents can use the medium simultaneously, as in state-of-the-art wireless networking standards. This calls for a certain form of communication scheduling. To that end, we propose a multi-agent deep reinforcement learning framework, called SchedNet, in which agents learn how to schedule themselves, how to encode messages, and how to select actions based on received messages. SchedNet decides which agents are entitled to broadcast their (encoded) messages by learning the importance of each agent's partially observed information.
- Actor: Collection of n per-agent individual actor blocks (i.e., WG: weight generator, ENC: message encoder, AS: action selector)
- Scheduler: Map from weights w to schedule profile c
- Critic: Estimates the action value function of the actor
- Each block is a fully connected neural network
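A minimal sketch of the three per-agent actor blocks, assuming each is a small fully connected layer. All dimensions, layer sizes, and names here are illustrative, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(in_dim, out_dim):
    """One fully connected layer with random weights (illustrative only)."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    b = np.zeros(out_dim)
    return lambda x: np.tanh(x @ W + b)

OBS_DIM, MSG_DIM, N_ACTIONS = 4, 2, 4  # assumed toy sizes

# Per-agent actor blocks:
weight_generator = fc(OBS_DIM, 1)                     # WG: observation -> scheduling weight w_i
message_encoder  = fc(OBS_DIM, MSG_DIM)               # ENC: observation -> encoded message m_i
action_selector  = fc(OBS_DIM + MSG_DIM, N_ACTIONS)   # AS: (obs, received msgs) -> action scores

obs = rng.standard_normal(OBS_DIM)
w = weight_generator(obs)           # scalar weight consumed by the scheduler
m = message_encoder(obs)            # message broadcast only if this agent is scheduled
scores = action_selector(np.concatenate([obs, m]))
action = int(np.argmax(scores))
```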
- Distributed Execution
- Each agent determines its scheduling weight
- k agents are scheduled by the weight-based scheduling algorithm (WSA)
- Scheduled agents broadcast their messages to all agents
- Each agent selects an action based on its observation and the received messages
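The execution steps above can be sketched as follows. `top_k_schedule` is an illustrative stand-in for the weight-based scheduling algorithm (WSA), assuming the Top(k) variant that schedules the k agents with the largest weights:

```python
import numpy as np

def top_k_schedule(weights, k):
    """Top(k) WSA sketch: schedule the k agents with the largest weights."""
    order = np.argsort(weights)[::-1]       # agent indices, highest weight first
    c = np.zeros(len(weights), dtype=int)   # schedule profile c
    c[order[:k]] = 1
    return c

# One execution step for n agents under bandwidth constraint k (toy numbers):
n, k = 4, 2
weights = np.array([0.9, 0.1, 0.5, 0.3])   # produced by each agent's WG block
schedule = top_k_schedule(weights, k)      # -> [1, 0, 1, 0]: agents 0 and 2 scheduled
messages = {i: f"msg_{i}" for i in range(n) if schedule[i] == 1}
# every agent then selects its action from its own observation plus `messages`
```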
```shell
git clone https://github.com/rhoowd/sched_net.git
cd sched_net
python main.py
```
- n agents try to capture a randomly moving prey
- Observation: own position and the relative position of the prey (heterogeneous observation ranges)
- Action: move up/down/left/right
- Reward: given when the agents capture the prey
- Performance metric: number of steps taken to capture the prey
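A toy sketch of this predator-prey environment, assuming a square grid; the grid size, capture rule, and class name are illustrative assumptions, not the repo's actual environment:

```python
import numpy as np

class PredatorPrey:
    """Toy grid world: n agents chase a randomly moving prey."""

    MOVES = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up/down/left/right

    def __init__(self, n_agents=2, size=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.size = size
        self.agents = self.rng.integers(0, size, (n_agents, 2))
        self.prey = self.rng.integers(0, size, 2)
        self.steps = 0  # steps-to-capture is the performance metric

    def step(self, actions):
        for i, a in enumerate(actions):
            self.agents[i] = np.clip(self.agents[i] + self.MOVES[a], 0, self.size - 1)
        # the prey moves randomly
        self.prey = np.clip(self.prey + self.MOVES[int(self.rng.integers(4))],
                            0, self.size - 1)
        self.steps += 1
        captured = any((a == self.prey).all() for a in self.agents)
        reward = 1.0 if captured else 0.0  # reward only on capture (assumed shape)
        return reward, captured
```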
- Setup
- Train the models until convergence
- Evaluate the models by averaging the metrics over 1,000 iterations
- Communication improves performance: SchedNet and DIAL outperform IDQN and COMA
- Considering scheduling during training helps: Sched-Top(1) outperforms DIAL(1), which is trained without considering scheduling
- Intelligent scheduling: Sched-Top(1) improves performance by 43% compared to round-robin scheduling