/transfer-learning-across-platformer-games

Performance comparison of machine-learning-generated AIs that play Super Mario Kart for the SNES. Group project for ITCS 6156, Fall 2019.


Learning to play games using OpenAI gym-retro with NEAT

NEAT (NeuroEvolution of Augmenting Topologies) is a method developed by Kenneth O. Stanley for evolving arbitrary neural networks.

For further information on the general concepts and theory, see the Selected Publications on Stanley's website.

Training

The training scripts can be run with their default arguments.

If you change the downscale rate or play a different game, you will need to adjust "num_inputs" in your config file.
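The relationship between the downscale rate and "num_inputs" can be sketched as below. The 224x256 frame size and the integer-division downsampling are assumptions for illustration; check your environment's actual observation shape and how the training script downsamples it.

```python
def compute_num_inputs(frame_height, frame_width, downscale):
    """Flattened size of the network input after downsampling
    each frame dimension by an integer downscale rate."""
    return (frame_height // downscale) * (frame_width // downscale)

# Assuming 224x256 SNES frames and the downscale rate 16 from the
# example command below: (224 // 16) * (256 // 16) = 14 * 16
print(compute_num_inputs(224, 256, 16))  # 224
```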

Single-threaded

This will train using a single CPU core and render both the OpenAI environment and the downsampled network input.

For help:

python3 train.py -h

Format:

python3 train.py -c <checkpoint-filepath> -d <downscale-rate> -g <game-env> -e <number-of-generations> -r <record (bool)> -s <state>

Multi-threaded

This will train using p threads and will not do any rendering. For help:

python3 train-parallel.py -h
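Conceptually, the parallel trainer fans genome evaluations out across p workers (neat-python ships a ParallelEvaluator for exactly this). The stdlib-only sketch below illustrates the fan-out pattern; the fitness function here is a stand-in, not the project's actual evaluation code, which would run one gym-retro episode per genome and return its score.

```python
from multiprocessing.pool import ThreadPool

def eval_genome(genome):
    """Stand-in fitness function; the real evaluator would play
    one episode in the gym-retro environment and score it."""
    return sum(genome)

# Toy population; real genomes come from the NEAT population.
population = [[1, 2], [3, 4], [5, 6]]

# -p controls the worker count; 3 workers here as an example.
with ThreadPool(processes=3) as pool:
    fitnesses = pool.map(eval_genome, population)

print(fitnesses)  # [3, 7, 11]
```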

Format:

python3 train-parallel.py -c <checkpoint-filepath> -d <downscale-rate> -g <game-env> -e <number-of-generations> -p <number-of-threads> -r <record (bool)> -s <state>
Example Commands:

python3 train-parallel.py -d 16 -g SuperMarioWorld-Snes -r ./replays -p 12 -s YoshiIsland1

Playback

This will play back the winning genome (or any genome saved in pickle format).

python3 playback.py -d <downscale rate> -g <game> -p <path-to-winner.pkl> -s <state>
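The winner file is an ordinary Python pickle, so loading it back is a one-liner. A stdlib-only sketch of the save/load round trip; the dict here is a placeholder for the evolved genome object, which in the real script would be turned back into a network using the same NEAT config used for training.

```python
import pickle

# Placeholder for the genome object the training run saves.
winner = {"fitness": 42.0, "nodes": 12}

# Training would dump the best genome to disk like this...
with open("winner.pkl", "wb") as f:
    pickle.dump(winner, f)

# ...and playback loads it via the -p <path-to-winner.pkl> argument.
with open("winner.pkl", "rb") as f:
    genome = pickle.load(f)

print(genome["fitness"])  # 42.0
```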

Additional References

The NEAT paper

gym-retro

neat-python

Medium blog post

TODO:

  • get size of input programmatically
  • read the "-r" parameter to specify the replay directory