minimalRL-pytorch

Implementations of basic RL algorithms with minimal lines of code! (PyTorch based)

  • Each algorithm is self-contained in a single file.

  • Each file is around 100~150 lines of code.

  • Every algorithm can be trained within 30 seconds, even without a GPU.

  • The environment is fixed to "CartPole-v1", so you can focus on the implementations (a rough sketch of the single-file pattern follows this list).
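
For reference, here is a minimal sketch of the single-file pattern described above: a REINFORCE-style training loop on CartPole-v1. Hyperparameters, network sizes, and names are illustrative only, not the repository's exact code (see REINFORCE.py for that).

# Illustrative sketch only -- not the code in REINFORCE.py.
import gym
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Categorical

class Policy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def main():
    env = gym.make("CartPole-v1")
    pi = Policy()
    optimizer = optim.Adam(pi.parameters(), lr=5e-4)
    gamma = 0.98

    for episode in range(1000):
        obs, _ = env.reset()                        # gym >= 0.26: reset() returns (obs, info)
        log_probs, rewards, done = [], [], False
        while not done:
            prob = pi(torch.from_numpy(obs).float())
            dist = Categorical(prob)
            action = dist.sample()
            obs, reward, terminated, truncated, _ = env.step(action.item())
            done = terminated or truncated
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)

        # Discounted Monte Carlo returns, then one policy-gradient step per episode
        returns, G = [], 0.0
        for r in reversed(rewards):
            G = r + gamma * G
            returns.insert(0, G)
        loss = -torch.stack(log_probs) @ torch.tensor(returns)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    env.close()

if __name__ == "__main__":
    main()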

Algorithms

  1. REINFORCE (67 lines)
  2. Vanilla Actor-Critic (98 lines)
  3. DQN (112 lines, including replay memory and target network; a sketch of such a replay buffer follows this list)
  4. PPO (119 lines, including GAE)
  5. DDPG (145 lines, including OU noise and soft target update)
  6. A3C (129 lines)
  7. ACER (149 lines)
  8. A2C (188 lines)
  9. SAC (171 lines) added!!
  10. PPO-Continuous (161 lines) added!!
  11. Vtrace (137 lines) added!!
  12. Any suggestions ...?
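
The DQN entry above bundles a replay memory into the same file. A minimal sketch of such a buffer is shown below; the class and method names here are hypothetical and may differ from the repository's own implementation.

# Illustrative replay-buffer sketch -- names are hypothetical, not necessarily those in dqn.py.
import collections
import random
import numpy as np
import torch

class ReplayBuffer:
    def __init__(self, capacity=50000):
        self.buffer = collections.deque(maxlen=capacity)   # oldest transitions are dropped first

    def put(self, transition):
        # transition = (s, a, r, s_prime, done_mask)
        self.buffer.append(transition)

    def sample(self, n):
        batch = random.sample(self.buffer, n)
        s, a, r, s_prime, done_mask = map(np.array, zip(*batch))
        return (torch.tensor(s, dtype=torch.float),
                torch.tensor(a, dtype=torch.int64).unsqueeze(1),
                torch.tensor(r, dtype=torch.float).unsqueeze(1),
                torch.tensor(s_prime, dtype=torch.float),
                torch.tensor(done_mask, dtype=torch.float).unsqueeze(1))

    def size(self):
        return len(self.buffer)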

Dependencies

  1. PyTorch
  2. OpenAI Gym (> 0.26.2, IMPORTANT!! Earlier versions are no longer supported)
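
The version bound matters because Gym 0.26 changed the core API: reset() now returns (obs, info) and step() returns five values (obs, reward, terminated, truncated, info). A quick sanity check for your installation:

# Quick check that the installed Gym uses the post-0.26 API the scripts expect.
import gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated          # an episode ends on either flag
env.close()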

Usage

# Works only with Python 3.
# e.g.
python3 REINFORCE.py
python3 actor_critic.py
python3 dqn.py
python3 ppo.py
python3 ddpg.py
python3 a3c.py
python3 a2c.py
python3 acer.py
python3 sac.py