deep_q_rl

Theano-based implementation of Deep Q-learning

Primary language: Python. License: BSD 3-Clause ("New" or "Revised").

Introduction

This package provides a Theano-based implementation of the deep Q-learning algorithm described in:

Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller (2013).
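The core idea of the paper is one-step Q-learning: train a network so that Q(s, a) approaches r + γ · max_a' Q(s', a'), with the bootstrap term dropped at terminal states. As a rough illustration (plain NumPy, not this package's Theano code; all names here are made up for the example):

```python
import numpy as np

def q_learning_targets(rewards, next_q_values, terminals, discount=0.95):
    """Compute one-step Q-learning targets: r + discount * max_a' Q(s', a').
    The bootstrap term is zeroed for transitions that ended the episode."""
    rewards = np.asarray(rewards, dtype=np.float64)
    terminals = np.asarray(terminals, dtype=np.float64)
    max_next = np.max(np.asarray(next_q_values, dtype=np.float64), axis=1)
    return rewards + discount * max_next * (1.0 - terminals)

# Example: two transitions; the second one ends its episode,
# so its target is just the immediate reward.
targets = q_learning_targets(
    rewards=[1.0, 0.0],
    next_q_values=[[0.5, 2.0], [3.0, 1.0]],
    terminals=[0, 1],
    discount=0.9,
)
# targets: [1.0 + 0.9 * 2.0, 0.0] = [2.8, 0.0]
```

The network is then trained by regressing its Q(s, a) output toward these targets over minibatches sampled from a replay memory.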

The neural network code is largely borrowed from Sander Dieleman's solution for the Galaxy Zoo Kaggle challenge.

Here is a video showing a trained network playing Breakout:

http://youtu.be/SZ88F82KLX4

Dependencies

The script dep_script.sh can be used to install all dependencies under Ubuntu.

Running

Use the script ale_run.py to start all the necessary processes:

$ python ale_run.py --exp_pref data

This will store output files in a folder prefixed with data in the current directory. A pickled version of the network object is saved after every epoch. The file results.csv contains the testing output. You can plot the training progress by running plot_results.py:

$ python plot_results.py data_09-29-15-46_0p0001_0p9/results.csv
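If you want to inspect results.csv yourself rather than use the bundled script, it parses like any ordinary CSV. A minimal sketch (the column names below are assumptions for illustration; check the header of your actual results.csv):

```python
import csv
import io

# Stand-in for the real results.csv; the actual columns may differ.
sample = """epoch,num_episodes,total_reward,reward_per_episode
1,10,120,12.0
2,8,200,25.0
"""

rows = list(csv.DictReader(io.StringIO(sample)))
rewards = [float(r["reward_per_episode"]) for r in rows]
# rewards: [12.0, 25.0]
```

Plotting `rewards` against the epoch column with matplotlib reproduces the kind of learning curve plot_results.py generates.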

After a couple of days of training, you can watch the trained network play using the ale_run_watch.py script:

$ python ale_run_watch.py data_09-29-15-46_0p0001_0p9/network_file_99.pkl
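The network_file_*.pkl snapshots are ordinary pickles, so they can also be loaded directly in your own code. A hedged sketch (note that the class defining the network must be importable at load time, or unpickling will fail; the round trip below uses a stand-in dict only to demonstrate the mechanics):

```python
import os
import pickle
import tempfile

def load_network(path):
    """Load a pickled network snapshot from disk."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Demonstrate the round trip with a stand-in object in place of a real network.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump({"epoch": 99}, f)
    path = f.name

snapshot = load_network(path)
os.remove(path)
# snapshot: {"epoch": 99}
```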

Getting Help

The deep Q-learning web forum can be used for discussion and advice related to deep Q-learning in general and this package in particular.

See Also