
A comparison of the performance of DQN and several of its variants, using PyTorch and Pong.


Battle of Deep Q-Nets

This repo contains implementations of the Deep Q-Network (DQN) and two of its variants: Double DQN and Duelling DQN. A variant that uses a duelling architecture and computes its loss in Double-DQN style is also included. Performance was evaluated on PongDeterministic-v4, since it converges quickly.
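The two ideas behind the combined variant can be sketched briefly: the duelling architecture splits the Q-function into a state-value stream and an advantage stream, while the Double-DQN update uses the online network to select the next action and the target network to evaluate it. A minimal PyTorch sketch follows; the layer sizes and function names here are illustrative, not taken from this repo's code.

```python
import torch
import torch.nn as nn


class DuellingDQN(nn.Module):
    """Duelling head: separate value V(s) and advantage A(s, a) streams,
    recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    (Illustrative sizes; the repo's actual network follows RL-Adventure.)"""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # V(s)
        self.advantage = nn.Linear(64, n_actions)  # A(s, a)

    def forward(self, x):
        h = self.feature(x)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online, target, next_obs, reward, done, gamma=0.99):
    """Double-DQN target: the online net picks the greedy next action,
    the target net evaluates it, reducing overestimation bias."""
    with torch.no_grad():
        next_action = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_action).squeeze(1)
        return reward + gamma * next_q * (1.0 - done)
```

The training loss is then the usual temporal-difference error between `online(obs).gather(1, action)` and this target.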

References:

Code Reference:

The code is heavily inspired by https://github.com/higgsfield/RL-Adventure; the network architecture and hyperparameters are borrowed directly from that repo.

Rewards vs Episodes:

[reward_curve plot]

Loss vs Episodes:

[loss_curve plot]