PacmanDQN

Deep Reinforcement Learning in Pac-man

Example usage

Run a model on the smallGrid layout for a total of 6000 episodes, of which the first 5000 are used for training.

$ python3 pacman.py -p PacmanDQN -n 6000 -x 5000 -l smallGrid

Layouts

Different layouts can be found and created in the layouts directory. To use another layout, pass its name via the -l flag, as in the example below.
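
For instance, assuming a mediumGrid layout file exists in the layouts directory (as in the standard Berkeley Pac-man code), training on it would look like:

$ python3 pacman.py -p PacmanDQN -n 6000 -x 5000 -l mediumGrid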

Parameters

Parameters can be found in the params dictionary in pacmanDQN_Agents.py.

Models are saved as "checkpoint" files in the /saves directory.
Load and save filenames can be set using the load_file and save_file parameters.

  • Episodes before training starts: train_start
  • Replay memory batch size: batch_size
  • Number of experience tuples in replay memory: mem_size
  • Discount rate (gamma value): discount
  • Learning rate: lr

Exploration/exploitation (ε-greedy):

  • Epsilon start value: eps
  • Epsilon final value: eps_final
  • Number of steps over which epsilon is annealed linearly from eps to eps_final: eps_step
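
For reference, a minimal sketch of what such a params dictionary might look like is shown below. The keys follow the parameter names above; the values are illustrative placeholders, not the repository's defaults.

params = {
    # Model backups ("checkpoint" files in /saves)
    'load_file': None,       # checkpoint to load, None = start from scratch
    'save_file': None,       # checkpoint name to save under

    # Training
    'train_start': 5000,     # episodes before training starts
    'batch_size': 32,        # replay memory batch size
    'mem_size': 100000,      # number of experience tuples in replay memory
    'discount': 0.95,        # discount rate (gamma value)
    'lr': 0.0002,            # learning rate

    # Exploration/exploitation (epsilon-greedy)
    'eps': 1.0,              # epsilon start value
    'eps_final': 0.1,        # epsilon final value
    'eps_step': 10000,       # steps over which epsilon decays linearly
}

# Linear epsilon schedule implied by eps, eps_final and eps_step:
# eps(t) = max(eps_final, eps - t * (eps - eps_final) / eps_step)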

Citation

Please cite this repository if it was useful for your research:

@mastersthesis{van2016deep,
  title={Deep Reinforcement Learning in Pac-man},
  author={van der Ouderaa, Tycho},
  year={2016},
  school={University of Amsterdam},
  type={Bachelor Thesis}
}

Requirements

  • python==3.5.1
  • tensorflow==0.8rc

Acknowledgements

DQN framework adapted from an existing implementation made for ATARI / the Arcade Learning Environment.

Pac-man game implementation from the UC Berkeley AI Pac-Man Projects (http://ai.berkeley.edu).