
Trained a reinforcement learning agent to play Ms. Pac-Man on the Atari 2600. Built a web app that live-streams gameplay over TCP/IP in real time, with Flask as the app interface.


ms.pacman.ai - Group3

George Washington University, Cloud Computing - DATS6450, Spring 2022


See Live Demo Here!

Project Description

Train an AI agent to play Ms. Pac-Man on the Atari 2600.

(screenshot: Ms. Pac-Man Gym environment)
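Atari agents typically learn from downsampled grayscale frames rather than raw RGB screens. A minimal preprocessing sketch (NumPy only; the 84x84 target size is a common DQN convention and an assumption here, not necessarily this repo's setting):

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 84) -> np.ndarray:
    """Convert a raw RGB Atari frame (210x160x3, uint8) into a small
    grayscale observation, a common DQN-style preprocessing step."""
    # Luminance-weighted grayscale conversion.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Crude nearest-neighbour downsample to size x size (illustrative;
    # real pipelines usually use cv2.resize or PIL).
    rows = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    small = gray[np.ix_(rows, cols)]
    # Scale to [0, 1] for the network input.
    return (small / 255.0).astype(np.float32)

frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = preprocess_frame(frame)
print(obs.shape)  # (84, 84)
```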

Table of Contents

  1. Team Members
  2. How to Run
  3. Folder Structure
  4. Background and Related Works
  5. Architecture
  6. Results
  7. Presentation
  8. References
  9. Licensing

Team Members

How to Run

See web_app README.

Folder Structure

  1. assets: static assets for the web app
  2. GPU script: scripts for training the model on a GPU machine
  3. logs: web-app logs
  4. model_building: experiments for initial model building
  5. stream_test: tests of the streaming service during gameplay
  6. stream_test_react: tests connecting the stream to the React front end
  7. web_app: main web-app directory
  8. web_app_test: tests for the web app

Background and Related Works

  • Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013 [4]
  • Deep Reinforcement Learning, DeepMind Blog Post, 2016 [11]
  • Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, Schrittwieser et al., 2020 [12]
  • MuZero: Mastering Go, Chess, Shogi, and Atari without Rules, DeepMind Blog Post, 2020 [13]

Architecture

Learning Network Architecture

(diagram: DQN network architecture)
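Whatever the network's layers look like, a DQN is trained by regressing its Q-values toward the Bellman target. The core update can be sketched in plain NumPy (the batch values below are made up for illustration; in practice the Q arrays come from the network):

```python
import numpy as np

def dqn_targets(q_values, next_q_values, actions, rewards, dones, gamma=0.99):
    """Compute DQN regression targets y = r + gamma * max_a' Q(s', a')
    for a batch of transitions, with zero bootstrap on terminal states."""
    targets = q_values.copy()
    bootstrap = (1.0 - dones) * gamma * next_q_values.max(axis=1)
    targets[np.arange(len(actions)), actions] = rewards + bootstrap
    return targets

# Tiny batch: 2 transitions, 3 actions (Ms. Pac-Man actually has 9).
q      = np.zeros((2, 3))
next_q = np.array([[1.0, 5.0, 2.0], [0.0, 0.0, 0.0]])
y = dqn_targets(q, next_q, actions=np.array([0, 2]),
                rewards=np.array([1.0, 2.0]), dones=np.array([0.0, 1.0]))
print(y)  # y[0, 0] = 1 + 0.99 * 5 = 5.95; y[1, 2] = 2.0 (terminal)
```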

Streaming Service Architecture

(diagram: streaming service architecture)
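Streaming frames over TCP needs some framing, since TCP delivers a byte stream with no message boundaries. One common scheme, sketched here with the standard library only (not necessarily the exact protocol this repo uses), is a 4-byte length prefix per frame:

```python
import io
import struct

def encode_frame(payload: bytes) -> bytes:
    """Prefix a frame with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(stream: io.BufferedIOBase) -> bytes:
    """Read one length-prefixed frame back from a byte stream
    (in a real service this would wrap a socket, e.g. sock.makefile('rb'))."""
    header = stream.read(4)
    (length,) = struct.unpack(">I", header)
    return stream.read(length)

# Round-trip two frames through an in-memory stream.
buf = io.BytesIO(encode_frame(b"frame-1") + encode_frame(b"frame-2"))
print(read_frame(buf), read_frame(buf))  # b'frame-1' b'frame-2'
```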

Web-App Architecture

(diagram: web-app architecture)
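A typical way a Flask app serves a live video stream to the browser is a `multipart/x-mixed-replace` response whose generator yields one JPEG per part. The framing itself is just bytes and can be sketched without Flask (the boundary name and response shape here are common conventions, assumed rather than taken from this repo):

```python
BOUNDARY = b"frame"  # hypothetical multipart boundary name

def mjpeg_part(jpeg_bytes: bytes) -> bytes:
    """Wrap one JPEG frame as a multipart/x-mixed-replace part.
    In a Flask app, a generator yielding these parts would typically back
    Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame')."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n\r\n"
            + jpeg_bytes + b"\r\n")

part = mjpeg_part(b"\xff\xd8...fake-jpeg...\xff\xd9")
print(part.startswith(b"--frame"))  # True
```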

Cloud Architecture

(diagram: cloud architecture)

Results

(screenshots: game results)

Presentation

Google Slide Presentation

References

  1. OpenAI Gym
@misc{1606.01540,
  Author = {Greg Brockman and Vicki Cheung and Ludwig Pettersson and Jonas Schneider and John Schulman and Jie Tang and Wojciech Zaremba},
  Title = {OpenAI Gym},
  Year = {2016},
  Eprint = {arXiv:1606.01540},
}
  2. Arcade Learning Environment
@Article{bellemare13arcade,
    author = {{Bellemare}, M.~G. and {Naddaf}, Y. and {Veness}, J. and {Bowling}, M.},
    title = {The Arcade Learning Environment: An Evaluation Platform for General Agents},
    journal = {Journal of Artificial Intelligence Research},
    year = "2013",
    month = "jun",
    volume = "47",
    pages = "253--279",
}
  3. Keras-RL
@misc{plappert2016kerasrl,
    author = {Matthias Plappert},
    title = {keras-rl},
    year = {2016},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/keras-rl/keras-rl}},
}
  4. Playing Atari with Deep Reinforcement Learning, Mnih et al., 2013
  5. Deep Reinforcement Learning with Double Q-learning, van Hasselt et al., 2015
  6. Continuous Deep Q-Learning with Model-based Acceleration, Gu et al., 2016
  7. Dueling Network Architectures for Deep Reinforcement Learning, Wang et al., 2016
  8. Prioritized Experience Replay, Schaul et al., 2016
  9. Rainbow: Combining Improvements in Deep Reinforcement Learning, Hessel et al., 2017
  10. Noisy Networks for Exploration, Fortunato et al., 2018
  11. Deep Reinforcement Learning, DeepMind Blog Post, 2016
  12. Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, Schrittwieser et al., 2020
  13. MuZero: Mastering Go, Chess, Shogi, and Atari without Rules, DeepMind Blog Post, 2020

Licensing

  • MIT License