The repository "Deep RL from scratch" contains implementations and examples showcasing various Deep Reinforcement Learning (DRL) algorithms built entirely from scratch. This means that the implementation of DRL algorithms from scratch involves building neural networks, defining reward structures, and handling the reinforcement learning pipeline without using pre-built components.
The "Deep RL from Scratch" repository comprises implementations and examples showcasing various Deep Reinforcement Learning (DRL) algorithms built entirely from scratch. This involves constructing neural networks, defining reward structures, and managing the reinforcement learning pipeline without pre-built components.
Deep Q-Network (DQN):
- A foundational algorithm combining Q-learning with deep neural networks, trained with experience replay and a periodically updated target network. See Mnih et al., 2013 for the original paper.
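As an illustration only (not code from this repository), here is a minimal NumPy sketch of the Q-learning target that a DQN-style loss regresses the online network toward; the function name and array shapes are assumptions made for the example.

```python
import numpy as np

def dqn_td_target(rewards, next_q_values, dones, gamma=0.99):
    # Q-learning target: r + gamma * max_a' Q(s', a'), zeroed at terminal states.
    # In full DQN, next_q_values would come from a periodically updated target network.
    return rewards + gamma * (1.0 - dones) * np.max(next_q_values, axis=-1)

# Toy batch of 2 transitions over 3 actions.
rewards = np.array([1.0, 0.0])
next_q = np.array([[0.5, 0.2, 0.1],
                   [0.3, 0.9, 0.0]])
dones = np.array([0.0, 1.0])
print(dqn_td_target(rewards, next_q, dones))  # -> [1.495 0.   ]
```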
Proximal Policy Optimization (PPO):
- A policy-gradient algorithm that improves training stability by clipping the policy update ratio. See Schulman et al., 2017 for the original paper.
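For illustration (not repository code), a hedged NumPy sketch of PPO's clipped surrogate objective; log-probabilities and advantages are assumed to be precomputed arrays.

```python
import numpy as np

def ppo_clipped_objective(log_prob_new, log_prob_old, advantages, clip_eps=0.2):
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    ratio = np.exp(log_prob_new - log_prob_old)
    clipped_ratio = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Pessimistic minimum of the unclipped and clipped surrogates, averaged over the batch.
    return np.mean(np.minimum(ratio * advantages, clipped_ratio * advantages))
```

Clipping the ratio removes the incentive to move the policy far from the data-collecting policy in a single update, which is the main source of PPO's stability.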
Trust Region Policy Optimization (TRPO):
- A conservative policy optimization algorithm that constrains each update to a KL-divergence trust region. See Schulman et al., 2015 for the original paper.
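A minimal sketch, assuming diagonal-Gaussian policies, of two pieces TRPO relies on: the KL divergence that defines the trust region and the acceptance test used in its backtracking line search. Names and the max_kl value are illustrative, not taken from this repository.

```python
import numpy as np

def diag_gaussian_kl(mu_old, log_std_old, mu_new, log_std_new):
    # KL(pi_old || pi_new) for diagonal Gaussian policies, summed over action dimensions.
    var_old, var_new = np.exp(2.0 * log_std_old), np.exp(2.0 * log_std_new)
    return np.sum(log_std_new - log_std_old
                  + (var_old + (mu_old - mu_new) ** 2) / (2.0 * var_new) - 0.5, axis=-1)

def accept_step(surrogate_improvement, mean_kl, max_kl=0.01):
    # A candidate step is kept only if it improves the surrogate objective
    # while the average KL stays inside the trust region.
    return surrogate_improvement > 0.0 and mean_kl <= max_kl
```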
Advantage Actor-Critic (A2C):
- A synchronous, batched variant of the advantage actor-critic algorithm. See Mnih et al., 2016 for the foundational paper.
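A minimal NumPy sketch of the per-sample terms an A2C update combines; in a real implementation these would be differentiable tensors and gradients would flow through log_probs and values. Names and coefficients are illustrative assumptions.

```python
import numpy as np

def a2c_loss(log_probs, values, returns, entropies,
             value_coef=0.5, entropy_coef=0.01):
    # Advantage estimate: n-step (or discounted) return minus the critic's baseline.
    advantages = returns - values
    policy_loss = -(log_probs * advantages)      # policy-gradient term
    value_loss = value_coef * advantages ** 2    # critic regression term
    entropy_bonus = entropy_coef * entropies     # encourages exploration
    return np.mean(policy_loss + value_loss - entropy_bonus)
```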
Asynchronous Advantage Actor-Critic (A3C):
- Combines actor-critic learning with asynchronous training across multiple parallel workers. See Mnih et al., 2016 for the original paper.
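A toy sketch of the asynchronous update pattern, with random numbers standing in for real actor-critic gradients; the shared-parameter array, worker count, and learning rate are assumptions for the example.

```python
import threading
import numpy as np

shared_params = np.zeros(4)          # parameters of the shared actor-critic model
lock = threading.Lock()

def worker(seed, steps=100, lr=0.01):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        local_params = shared_params.copy()             # worker syncs a local copy
        grad = rng.normal(size=local_params.shape)       # stand-in for a real gradient
        with lock:                                       # apply the update asynchronously
            shared_params[:] = shared_params - lr * grad

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_params)
```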
Deep Deterministic Policy Gradients (DDPG):
- An off-policy actor-critic algorithm with a deterministic actor, designed for continuous action spaces. See Lillicrap et al., 2016 for the original paper.
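An illustrative NumPy sketch (not repository code) of two DDPG ingredients: the critic's bootstrap target, where the target actor supplies the next action, and Gaussian exploration noise added to the deterministic action. Names and bounds are assumptions.

```python
import numpy as np

def ddpg_critic_target(rewards, next_q_target, dones, gamma=0.99):
    # y = r + gamma * Q_target(s', mu_target(s')); next_q_target is the target
    # critic evaluated at the target actor's deterministic next action.
    return rewards + gamma * (1.0 - dones) * next_q_target

def exploration_action(action, noise_std=0.1, low=-1.0, high=1.0):
    # The deterministic actor needs external noise to explore continuous actions.
    noisy = action + np.random.normal(0.0, noise_std, size=np.shape(action))
    return np.clip(noisy, low, high)
```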
Twin Delayed DDPG (TD3):
- An extension of DDPG that addresses overestimation bias and instability with twin critics, delayed policy updates, and target policy smoothing. See Fujimoto et al., 2018 for details.
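A hedged NumPy sketch of two of TD3's signature tricks: the clipped double-Q target and target policy smoothing. Function names, noise scales, and action bounds are assumptions for the example.

```python
import numpy as np

def td3_target(rewards, q1_next, q2_next, dones, gamma=0.99):
    # Clipped double-Q learning: bootstrap from the minimum of the two target
    # critics to reduce overestimation bias.
    return rewards + gamma * (1.0 - dones) * np.minimum(q1_next, q2_next)

def smoothed_target_action(target_action, noise_std=0.2, noise_clip=0.5,
                           low=-1.0, high=1.0):
    # Target policy smoothing: perturb the target actor's action with clipped
    # noise so the critic target is averaged over nearby actions.
    noise = np.clip(np.random.normal(0.0, noise_std, np.shape(target_action)),
                    -noise_clip, noise_clip)
    return np.clip(target_action + noise, low, high)
```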
Soft Actor-Critic (SAC):
- An off-policy maximum-entropy actor-critic algorithm for continuous action spaces. See Haarnoja et al., 2018 for the original paper.
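An illustrative sketch of SAC's soft Q-target, where an entropy bonus (scaled by the temperature alpha) enters as the negative log-probability of the next action; names and the alpha value are assumptions.

```python
import numpy as np

def sac_q_target(rewards, q1_next, q2_next, next_log_prob, dones,
                 gamma=0.99, alpha=0.2):
    # Soft value of the next state: min of the twin target critics minus
    # alpha * log pi(a'|s'), i.e. an entropy bonus for stochastic policies.
    soft_next_value = np.minimum(q1_next, q2_next) - alpha * next_log_prob
    return rewards + gamma * (1.0 - dones) * soft_next_value
```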
Categorical DQN (C51):
- A distributional variant of DQN that models the full return distribution over a fixed set of atoms rather than a single expected Q-value. See Bellemare et al., 2017 for the original paper.
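For illustration, a NumPy sketch of C51's categorical projection: the Bellman-shifted support r + gamma * z is mapped back onto the fixed atoms by splitting probability mass between neighbouring atoms. The support bounds and atom count are the commonly used defaults, assumed here.

```python
import numpy as np

def c51_project(rewards, dones, next_probs,
                v_min=-10.0, v_max=10.0, n_atoms=51, gamma=0.99):
    # next_probs: (batch, n_atoms) probabilities of the target distribution
    # for the greedy next action.
    atoms = np.linspace(v_min, v_max, n_atoms)
    delta_z = (v_max - v_min) / (n_atoms - 1)
    target = np.zeros((rewards.shape[0], n_atoms))
    # Bellman-updated (shifted and shrunk) support, clipped to [v_min, v_max].
    tz = np.clip(rewards[:, None] + gamma * (1.0 - dones[:, None]) * atoms[None, :],
                 v_min, v_max)
    b = (tz - v_min) / delta_z                       # fractional atom index
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    for i in range(target.shape[0]):
        for j in range(n_atoms):
            l, u = lower[i, j], upper[i, j]
            if l == u:                               # lands exactly on an atom
                target[i, l] += next_probs[i, j]
            else:                                    # split mass between neighbours
                target[i, l] += next_probs[i, j] * (u - b[i, j])
                target[i, u] += next_probs[i, j] * (b[i, j] - l)
    return target
```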
Deep SARSA (State-Action-Reward-State-Action):
- An on-policy counterpart to DQN that bootstraps from the action actually taken in the next state rather than the greedy action; the underlying SARSA update is due to Rummery and Niranjan, 1994.
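An illustrative NumPy sketch of the on-policy target a deep SARSA update would regress toward; unlike DQN's max over next actions, it bootstraps from the next action the behaviour policy actually selected. Names and shapes are assumptions.

```python
import numpy as np

def sarsa_target(rewards, next_q_values, next_actions, dones, gamma=0.99):
    # On-policy target: r + gamma * Q(s', a'), where a' is the action actually
    # chosen (e.g. epsilon-greedily) in the next state, not max_a' Q(s', a').
    next_q = next_q_values[np.arange(len(next_actions)), next_actions]
    return rewards + gamma * (1.0 - dones) * next_q
```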
These algorithms cover a spectrum of approaches to reinforcement learning problems, each with strengths and weaknesses that suit particular environments and tasks. The primary papers cited above provide further detail on each method.