Deep Reinforcement Learning

Deep Q-Network (DQN) to play classic Atari Games

3 Atari games (MsPacman, Boxing and Pong) are tested with the same architecture, and the agent achieves decent performance on each.


The key details of the architecture are as follows:
State Space:
  • Each environment observation is converted to greyscale and downscaled to 60 x 60 to conserve memory.
  • 4 consecutive frames are stacked together (60 x 60 x 4) to capture motion (see the preprocessing sketch below).
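A minimal preprocessing sketch, assuming numpy and skimage; the exact resize settings and stacking order in the repository may differ:

```python
import numpy as np
from collections import deque
from skimage.color import rgb2gray
from skimage.transform import resize

def preprocess(frame):
    """Convert an RGB Atari frame to a 60 x 60 greyscale image in [0, 1]."""
    grey = rgb2gray(frame)                 # (210, 160, 3) -> (210, 160)
    return resize(grey, (60, 60), anti_aliasing=True)

class FrameStack:
    """Maintain the last 4 preprocessed frames as a (60, 60, 4) state."""
    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        f = preprocess(frame)
        for _ in range(self.frames.maxlen):
            self.frames.append(f)          # fill the stack with the first frame
        return self.state()

    def step(self, frame):
        self.frames.append(preprocess(frame))
        return self.state()

    def state(self):
        return np.stack(self.frames, axis=-1)   # (60, 60, 4)
```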
Agent:
  • A convolutional neural network (CNN) is used to approximate the Q-function (a sketch follows below).
  • input → conv (6 x 6, 16 filters, stride 2) + ReLU → conv (4 x 4, 32 filters, stride 2) + ReLU → flatten → fully connected layer (256 units) + ReLU → linear output layer → state-action values (one per action)
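A sketch of this architecture using tf.keras; the repository may build the network with lower-level TensorFlow ops, and `n_actions` (the size of the game's action space) is an assumed parameter:

```python
import tensorflow as tf

def build_q_network(n_actions):
    """Q-network mapping a (60, 60, 4) stacked state to one Q-value per action."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, kernel_size=6, strides=2, activation="relu",
                               input_shape=(60, 60, 4)),
        tf.keras.layers.Conv2D(32, kernel_size=4, strides=2, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(n_actions),  # linear output layer: Q(s, a) for every action
    ])
```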
Training:
  • Training runs for 1 million environment steps (this can be increased for better performance).
  • For more stable gradient updates, several modifications are added, as follows (a combined sketch is shown after this list):
    • Experience replay to store transitions.
    • A separate, fixed target network (updated every 5k steps).
    • Rewards are clipped to the range [-1, 1].
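A minimal sketch of these three modifications, using the tf.keras network above; `online_net`, `target_net`, `optimizer`, the discount factor and the buffer capacity are assumed names and values for illustration, not the repository's actual code:

```python
import random
import numpy as np
import tensorflow as tf

GAMMA = 0.99                # assumed discount factor (not stated in this README)
TARGET_SYNC_EVERY = 5000    # target network update period (5k steps)

class ReplayBuffer:
    """Fixed-capacity experience replay storing (s, a, r, s', done) transitions."""
    def __init__(self, capacity=100_000):    # capacity is an assumed value
        self.buffer, self.capacity, self.pos = [], capacity, 0

    def add(self, s, a, r, s2, done):
        r = float(np.clip(r, -1.0, 1.0))      # reward clipping to [-1, 1]
        if len(self.buffer) < self.capacity:
            self.buffer.append((s, a, r, s2, done))
        else:
            self.buffer[self.pos] = (s, a, r, s2, done)
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=32):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        return s, a, r.astype(np.float32), s2, d.astype(np.float32)

def train_step(online_net, target_net, optimizer, batch):
    """One Q-learning update: y = r + gamma * max_a' Q_target(s', a') for non-terminal s'."""
    s, a, r, s2, done = batch
    next_q = tf.reduce_max(target_net(s2), axis=1)           # bootstrap from the target network
    targets = r + GAMMA * (1.0 - done) * next_q
    with tf.GradientTape() as tape:
        q_values = online_net(s)                              # (batch, n_actions)
        q = tf.reduce_sum(q_values * tf.one_hot(a, q_values.shape[-1]), axis=1)
        loss = tf.reduce_mean(tf.square(targets - q))
    grads = tape.gradient(loss, online_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, online_net.trainable_variables))
    return loss

# Every TARGET_SYNC_EVERY environment steps, copy the online weights into the target network:
# target_net.set_weights(online_net.get_weights())
```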
Note:
1. This project is from one of my modules (Advanced Topics in Machine Learning) at UCL, taught by Google DeepMind.
2. It therefore uses a smaller network and shorter training time than is commonly used, so that training is feasible without a GPU.
3. Due to the requirements of the module, the code is separated into one file per game, but the differences between the files are minor.
4. The saved model for each game after training is included (run the Load_Model.py file for each game to evaluate it).
  • The performance is still far from optimal because each agent is trained for only 1 million environment steps.



Required libraries:

  • TensorFlow
  • numpy
  • matplotlib
  • gym
  • random (Python standard library)
  • skimage (installed as scikit-image)