Deep RL from scratch

The "Deep RL from scratch" repository contains implementations and examples of various Deep Reinforcement Learning (DRL) algorithms built entirely from scratch: constructing the neural networks, defining the reward structures, and managing the full reinforcement learning pipeline without relying on pre-built components.

Important Deep RL Algorithms

  1. Deep Q-Network (DQN):

    • A foundational algorithm combining Q-learning with deep neural networks. For the main paper, please refer to Mnih et al., 2013.
  2. Proximal Policy Optimization (PPO):

    • A stability-focused policy optimization algorithm. For detailed information, please see Schulman et al., 2017.
  3. Trust Region Policy Optimization (TRPO):

    • A policy-gradient method that constrains each update to a trust region, providing monotonic improvement guarantees. See Schulman et al., 2015.
  4. Advantage Actor-Critic (A2C):

    • A synchronous, batched variant of the actor-critic approach popularized alongside A3C. For foundational details, see Mnih et al., 2016.
  5. A3C (Asynchronous Advantage Actor-Critic):

    • An actor-critic method that trains multiple parallel workers asynchronously. For further insights, see Mnih et al., 2016.
  6. Deep Deterministic Policy Gradients (DDPG):

    • An off-policy actor-critic algorithm for continuous action spaces, combining deterministic policy gradients with DQN-style replay and target networks. See Lillicrap et al., 2015.
  7. Twin Delayed DDPG (TD3):

    • An extension of DDPG, addressing overestimation bias and instability. Please see Fujimoto et al., 2018 for more details.
  8. Soft Actor-Critic (SAC):

    • An off-policy algorithm for continuous action spaces. For comprehensive understanding, check Haarnoja et al., 2018.
  9. Categorical DQN (C51):

    • A distributional RL algorithm that models the full return distribution over 51 atoms rather than only its expectation. See Bellemare et al., 2017.
  10. Deep SARSA (State-Action-Reward-State-Action):

    • An on-policy counterpart of DQN that replaces the max in the bootstrap target with the next action actually taken.

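As a concrete illustration of the DQN idea above, the sketch below computes the DQN bootstrap target and a single TD update. It uses a toy linear Q-function in NumPy; the state/action sizes, learning rate, and function names are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma = 4, 2, 0.99

# Hypothetical linear Q-function: Q(s, .) = W[s] (one row per state).
# A real DQN would use a deep network plus a replay buffer.
W_online = rng.normal(size=(n_states, n_actions))
W_target = W_online.copy()  # periodically synced target network

def q_values(W, s):
    return W[s]

def td_target(reward, next_state, done):
    # DQN target: r + gamma * max_a Q_target(s', a); no bootstrap at terminal states.
    bootstrap = 0.0 if done else gamma * q_values(W_target, next_state).max()
    return reward + bootstrap

def dqn_update(s, a, r, s_next, done, lr=0.1):
    # Semi-gradient step on the squared TD error for the taken action.
    target = td_target(r, s_next, done)
    td_error = target - q_values(W_online, s)[a]
    W_online[s, a] += lr * td_error
    return td_error

# One illustrative transition.
err = dqn_update(s=0, a=1, r=1.0, s_next=2, done=False)
```

Repeating the update on the same transition shrinks the TD error, since the target network is held fixed between syncs.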
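The clipped surrogate objective at the heart of PPO (Schulman et al., 2017) can be sketched in a few lines. The function name and sample values below are illustrative assumptions, not part of the repository:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    # L_clip = mean over samples of min(r * A, clip(r, 1-eps, 1+eps) * A),
    # where r = pi_new(a|s) / pi_old(a|s) and A is the advantage estimate.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratios = np.array([0.9, 1.0, 1.5])       # pi_new / pi_old per sample
advantages = np.array([1.0, -0.5, 2.0])  # estimated advantages
objective = ppo_clip_objective(ratios, advantages)
```

Taking the elementwise minimum removes the incentive to move the ratio outside [1 - eps, 1 + eps], which is what gives PPO its stability.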
These algorithms cover a spectrum of approaches to reinforcement learning, each with strengths and weaknesses that suit different environments and tasks. For further details, see the primary papers referenced above.