PyTorch and TensorFlow 2.0 implementations of state-of-the-art model-free reinforcement learning algorithms on both OpenAI Gym environments and a self-implemented Reacher environment.
Algorithms include Soft Actor-Critic (SAC), Twin Delayed DDPG (TD3), Actor-Critic (AC/A2C), Proximal Policy Optimization (PPO), QT-Opt (including the Cross-Entropy Method, CEM), PointNet, Transporter, etc.
This repo contains only the PyTorch implementation.
Here is my TensorFlow 2.0 + TensorLayer 2.0 implementation.
- Soft Actor-Critic (SAC): two versions are implemented.
  - SAC version 1: `sac.py`, using a state-value function. Paper: https://arxiv.org/pdf/1801.01290.pdf
  - SAC version 2: `sac_v2.py`, using a target Q-value function instead of a state-value function.
- Twin Delayed DDPG (TD3): `td3.py`, an implementation of TD3.
- Proximal Policy Optimization (PPO): todo.
- Actor-Critic (AC) / A2C: `ac.py`, a very extensible version of vanilla AC/A2C that is easy to change into DDPG etc., supporting all continuous/discrete and deterministic/non-deterministic cases.
- QT-Opt: two versions are implemented here.
- PointNet for landmark generation from images with unsupervised learning is implemented here. This method is also used for image-based reinforcement learning as a SOTA algorithm, called Transporter.
  - Original paper: Unsupervised Learning of Object Landmarks through Conditional Image Generation
  - Paper for RL: Unsupervised Learning of Object Keypoints for Perception and Control
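The two SAC versions listed above differ only in how the Q-learning target is formed. A minimal sketch of that difference, where all tensor names (`reward`, `log_prob`, `target_q1`, `target_v`, ...) are illustrative placeholders standing in for network outputs on a sampled batch, not the repo's actual variables:

```python
import torch

# Placeholder batch quantities (names and shapes are illustrative)
batch = 8
reward    = torch.zeros(batch, 1)
done      = torch.zeros(batch, 1)
log_prob  = torch.zeros(batch, 1)   # log pi(a'|s') for the sampled next action
target_q1 = torch.zeros(batch, 1)   # twin target Q-networks evaluated at (s', a')
target_q2 = torch.zeros(batch, 1)
target_v  = torch.zeros(batch, 1)   # separate state-value network (version 1 only)
alpha, gamma = 0.2, 0.99

# Version 1 (sac.py): bootstrap the Q-target from a learned value function V(s')
q_target_v1 = reward + (1 - done) * gamma * target_v

# Version 2 (sac_v2.py): no value network; bootstrap from the entropy-regularized
# minimum of the two target Q-networks instead
q_target_v2 = reward + (1 - done) * gamma * (torch.min(target_q1, target_q2)
                                             - alpha * log_prob)
```

Dropping the value network in version 2 follows the second SAC paper's simplification; the entropy bonus `-alpha * log_prob` enters the target directly.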
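For the TD3 entry above, the core of the algorithm is the critic-target computation with clipped double-Q and target-policy smoothing (the third trick, delayed policy updates, is noted in a comment). This is a sketch under assumed names, not the repo's actual API:

```python
import torch

def td3_target(reward, done, next_action, target_q1, target_q2,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """Sketch of TD3's critic target; target_q1/target_q2 are callables
    standing in for the twin target Q-networks."""
    # Target-policy smoothing: perturb the target action with clipped noise
    noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
    smoothed = (next_action + noise).clamp(-1.0, 1.0)
    # Clipped double-Q: bootstrap from the minimum of the twin target critics
    q_min = torch.min(target_q1(smoothed), target_q2(smoothed))
    # (Delayed policy updates: in full TD3 the actor and target networks are
    # updated only every few critic steps; omitted from this sketch.)
    return reward + (1 - done) * gamma * q_min
```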
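For the AC/A2C entry above, the one-step advantage actor-critic losses can be sketched as follows; the function and argument names are illustrative assumptions, not the repo's interface:

```python
import torch
import torch.nn.functional as F

def a2c_losses(log_prob, value, target_value):
    """Sketch of a one-step A2C loss from precomputed batch quantities."""
    # Advantage A(s, a) = bootstrapped target - V(s); detached so the
    # policy gradient does not flow into the critic
    advantage = (target_value - value).detach()
    actor_loss = -(log_prob * advantage).mean()    # policy-gradient term
    critic_loss = F.mse_loss(value, target_value)  # value regression
    return actor_loss, critic_loss
```

The same structure covers discrete and continuous actions, since only `log_prob` depends on the policy's distribution type.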
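For QT-Opt, actions are selected by maximizing Q(s, a) over continuous actions with the cross-entropy method, which iteratively refits a Gaussian to the highest-Q samples. A sketch under assumed names (`q_fn` stands in for the learned Q-network at a fixed state; all parameters are illustrative):

```python
import numpy as np

def cem_argmax(q_fn, action_dim, n_samples=64, n_elite=6, n_iters=3, seed=0):
    """Sketch of CEM action selection: approximately maximize q_fn over actions."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(n_iters):
        # Sample candidate actions, keep the top-Q "elite" fraction,
        # then refit the sampling Gaussian to the elites
        actions = rng.normal(mu, sigma, size=(n_samples, action_dim))
        elite = actions[np.argsort(q_fn(actions))[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu
```

Over a few iterations the elites pull the sampling distribution toward high-Q regions of the action space.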
Run an algorithm with `python ***.py`.
If you meet the error "NotImplementedError", it may be due to a wrong gym version. The newest gym==0.14 won't work; install gym==0.7 or gym==0.10 with `pip install -r requirements.txt`.
- SAC for gym Pendulum-v0:
SAC with automatic updating of the entropy variable alpha:
SAC without automatic updating of the entropy variable alpha:
The comparison shows that the automatic entropy update helps the agent learn faster.
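The automatic update compared above tunes alpha so that the policy entropy tracks a target value. A sketch of that update, where the target-entropy heuristic and all variable names are illustrative assumptions:

```python
import torch

# Automatic entropy-temperature tuning (sketch): alpha is learned so the
# policy entropy tracks a target, commonly set to -action_dim
action_dim = 1
target_entropy = -action_dim
log_alpha = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

log_prob = torch.full((8, 1), -2.0)  # stand-in for log pi(a|s) on a batch
alpha_loss = -(log_alpha * (log_prob + target_entropy).detach()).mean()
optimizer.zero_grad()
alpha_loss.backward()
optimizer.step()
alpha = log_alpha.exp().item()  # this alpha then weights the entropy bonus
```

Optimizing `log_alpha` rather than `alpha` directly keeps the temperature positive without an explicit constraint.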
- TD3 for gym Pendulum-v0:
TD3 with deterministic policy:
TD3 with non-deterministic/stochastic policy:
TD3 with a deterministic policy seems to work slightly better, but the results are broadly similar.
- AC for gym CartPole-v0:
However, vanilla AC/A2C cannot handle continuous cases like gym Pendulum-v0 well.
To cite this repository:
@misc{rlalgorithms,
  author = {Zihan Ding},
  title = {SOTA-RL-Algorithms},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/quantumiracle/SOTA-RL-Algorithms}},
}