sc2aibot

Implementing reinforcement-learning algorithms for the pysc2 environment

Info

This project implements the FullyConv reinforcement learning agent for pysc2 as specified in https://deepmind.com/documents/110/sc2le.pdf.

It's possible to use

  • A2C, which is a synchronous version of the A3C used in the DeepMind paper
  • PPO (Proximal Policy Optimization)
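
As a rough orientation, the FullyConv idea looks like the sketch below: convolutional branches over the screen and minimap feature layers, a 1x1 convolution giving a softmax over screen positions for the spatial argument, and fully connected heads for the action policy and value. The layer sizes and names here are illustrative assumptions, not the exact network in this repo.

    import tensorflow as tf  # TF 1.x API, matching the version used in this repo

    def fully_conv(screen, minimap, n_actions):
        """screen, minimap: [batch, height, width, channels] feature layers."""
        def branch(x):
            h = tf.layers.conv2d(x, 16, 5, padding="same", activation=tf.nn.relu)
            return tf.layers.conv2d(h, 32, 3, padding="same", activation=tf.nn.relu)

        state = tf.concat([branch(screen), branch(minimap)], axis=-1)

        # Spatial action head: 1x1 conv, softmax over all screen positions.
        spatial_logits = tf.layers.conv2d(state, 1, 1)
        spatial_policy = tf.nn.softmax(tf.layers.flatten(spatial_logits))

        # Non-spatial heads: action-id policy and state value from an FC layer.
        fc = tf.layers.dense(tf.layers.flatten(state), 256, activation=tf.nn.relu)
        action_policy = tf.nn.softmax(tf.layers.dense(fc, n_actions))
        value = tf.squeeze(tf.layers.dense(fc, 1), axis=-1)
        return spatial_policy, action_policy, value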

Differences from the DeepMind spec:

  • Use A2C or PPO instead of A3C
  • The non-spatial feature vector is discarded here. (Probably because of this, the agent can't learn CollectMineralsAndGas.)
  • There are some other minor simplifications to the observation space
  • Use different hyper-parameters
  • For the select-rectangle action, a rectangle of radius 5px is drawn around the selected point (uncertain how DeepMind does this; see the sketch after this list)
  • No function arguments other than the spatial one are learned here
  • And maybe others that I don't know of
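
To make the select-rectangle point above concrete, here is a hedged sketch of turning one predicted screen point into a pysc2 select_rect call with a fixed 5px radius. The FunctionCall/FUNCTIONS usage is standard pysc2; the helper itself and the clamping to the screen size are illustrative assumptions, not the repo's exact code.

    from pysc2.lib import actions

    def select_rect_around(x, y, radius=5, screen_size=64):
        """Build a select_rect FunctionCall centered on a predicted point."""
        x0, y0 = max(x - radius, 0), max(y - radius, 0)
        x1 = min(x + radius, screen_size - 1)
        y1 = min(y + radius, screen_size - 1)
        # Arguments: [select_add], top-left corner, bottom-right corner.
        return actions.FunctionCall(
            actions.FUNCTIONS.select_rect.id, [[0], [x0, y0], [x1, y1]]
        )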

Results

Map                          Avg score A2C   Avg score PPO   DeepMind avg
MoveToBeacon                 25              26              26
CollectMineralShards         91              100             103
DefeatZerglingsAndBanelings  48              ?               62
FindAndDefeatZerglings       42              45              45
DefeatRoaches                70-90           ?               100

Training graphs A2C:

Training graphs PPO:

  • Used the default parameters seen in the repo except:
    • DefeatRoaches, DefeatZerglingsAndBanelings: entropy_weights 1e-4/1e-4, n_steps_per_batch 5
    • Number of envs 32 or 64
  • DeepMind scores from the FullyConv policy in the release paper are shown for comparison.
  • The model here wasn't able to learn CollectMineralsAndGas or BuildMarines.

In DefeatRoaches and DefeatZerglingsAndBanelings the results are not stable. It took something like 5 runs to get the score for DefeatRoaches reported here, and the scores for both maps are still considerably worse than the DeepMind scores. It might be that at least the hyperparameters here are off (and possibly other things).

Other environments seem more stable.

The training was done using one core of a Tesla K80 GPU per environment.

With PPO the scores were slightly better than with A2C for the tested environments. However, training took much longer with PPO than with A2C; maybe other PPO parameters would give a faster training time. On the other hand, training with PPO seems more stable: the typical sigmoid shape of A2C learning curves doesn't appear.
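
For reference, the PPO comparison above uses the standard clipped surrogate objective; here is a minimal sketch of that loss (the clip_range value of 0.2 is the paper's default and an assumption here, not necessarily what this repo uses):

    import tensorflow as tf

    def ppo_clip_loss(new_log_prob, old_log_prob, advantage, clip_range=0.2):
        """Clipped surrogate objective from the PPO paper, as a loss to minimize."""
        ratio = tf.exp(new_log_prob - old_log_prob)
        unclipped = ratio * advantage
        clipped = tf.clip_by_value(ratio, 1.0 - clip_range, 1.0 + clip_range) * advantage
        return -tf.reduce_mean(tf.minimum(unclipped, clipped))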

Note:

The training is not deterministic, and the training time might vary even if nothing is changed. For example, I trained MoveToBeacon 5 times with default parameters and 64 environments. Here are the episode numbers at which the agent first achieved a score of 27:

4674
3079
2355
1231
6358

How to run

python run_agent.py --map_name MoveToBeacon --model_name my_beacon_model --n_envs 32

This will save

  • tf summaries to _files/summaries/my_beacon_model/
  • model to _files/models/my_beacon_model

relative to the project path. By default the A2C agent is used; to run PPO, specify --agent_mode ppo.

See run_agent.py for more arguments.
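
Training progress can be followed with TensorBoard by pointing it at the summaries directory (the path below assumes the example command above):

tensorboard --logdir _files/summaries/my_beacon_model/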

Requirements

  • Python 3 (will NOT work with Python 2)
  • pysc2 (tested with v1.2)
  • TensorFlow (tested with 1.4.0)
  • Other standard Python packages like numpy etc.

The code is tested on OS X and Linux; Windows is untested. Let me know if there are issues.

References

I have borrowed some ideas from https://github.com/xhujoy/pysc2-agents (the FullyConv network etc.) and OpenAI's baselines (A2C and PPO), but the implementation here is different from those. For parallel environments, the code from baselines is used, adapted for SC2 (the general idea is sketched below).
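
As an illustration of that parallel-environment idea (one environment per subprocess, stepped in lock-step over pipes), here is a minimal baselines-style sketch; it is not the actual wrapper used in this repo:

    from multiprocessing import Pipe, Process

    def _worker(remote, env_fn):
        """Run one environment in a subprocess, serving commands over a pipe."""
        env = env_fn()
        while True:
            cmd, data = remote.recv()
            if cmd == "step":
                remote.send(env.step(data))
            elif cmd == "reset":
                remote.send(env.reset())
            elif cmd == "close":
                remote.close()
                break

    class ParallelEnvs:
        """Step several environments in lock-step, one subprocess each."""

        def __init__(self, env_fns):
            pipes = [Pipe() for _ in env_fns]
            self.remotes = [local for local, _ in pipes]
            self.procs = [Process(target=_worker, args=(remote, fn), daemon=True)
                          for (_, remote), fn in zip(pipes, env_fns)]
            for proc in self.procs:
                proc.start()

        def reset(self):
            for remote in self.remotes:
                remote.send(("reset", None))
            return [remote.recv() for remote in self.remotes]

        def step(self, per_env_actions):
            for remote, action in zip(self.remotes, per_env_actions):
                remote.send(("step", action))
            return [remote.recv() for remote in self.remotes]

        def close(self):
            for remote in self.remotes:
                remote.send(("close", None))
            for proc in self.procs:
                proc.join()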