This is a private learning repository for reinforcement learning techniques used in robotics.
- Neural Network Basics: Backpropagation Derivation and Convolution Formulas [Zhihu] [Github]
- Reinforcement Learning Basics Ⅰ: Markov Processes and Value Functions [Zhihu] [Github]
- Reinforcement Learning Basics Ⅱ: Dynamic Programming, Monte Carlo, and Temporal-Difference Methods [Zhihu] [Github]
- Reinforcement Learning Basics Ⅲ: On-Policy vs. Off-Policy, Model-Based vs. Model-Free, and Rollout [Zhihu] [Github]
- Reinforcement Learning Basics Ⅳ: A Summary of State-of-the-Art Classic RL Algorithms [Zhihu] [Github]
- Reinforcement Learning Basics Ⅴ: Q-Learning Theory and Practice [Zhihu] (a minimal sketch of the update rule follows this list)
- Reinforcement Learning Basics Ⅵ: DQN Theory and Practice [Zhihu]
- Reinforcement Learning Basics Ⅶ: Double DQN & Dueling DQN Theory and Practice [Zhihu]
- Reinforcement Learning Basics Ⅷ: Vanilla Policy Gradient Theory and Practice [Zhihu]
- Reinforcement Learning Basics Ⅸ: TRPO Theory and Implementation in One Article [Zhihu]
- Reinforcement Learning Basics Ⅹ: The Two PPO Variants, Theory and Implementation, in One Article [Zhihu]
- Model-Based RL Ⅰ: Dyna, MVE & STEVE [Zhihu]
- Model-Based RL Ⅱ: MBPO Explained [Zhihu]
- Model-Based RL Ⅲ: Understanding PILCO Through Its Source Code [Zhihu]
- PR Ⅰ: Probabilistic Methods in Robotics: Maximum Likelihood Estimation (MLE) vs. Maximum A Posteriori (MAP) Estimation [Zhihu]
- PR Ⅱ: Bayesian Estimation/Inference and How It Differs from MAP [Zhihu]
- PR Ⅲ: From Gaussian Distributions to Gaussian Processes, Gaussian Process Regression, and Bayesian Optimization [Zhihu]
- PR Ⅳ: Bayesian Neural Networks [Zhihu]
- PR Ⅴ: Entropy, KL Divergence, Cross-Entropy, and JS Divergence, with Python Implementations [Zhihu] (a minimal sketch follows this list)
- PR Ⅵ: KL Divergence Between Multivariate Gaussian Distributions, with a Python Implementation [Zhihu] (a minimal sketch follows this list)
- Meta-Learning: An Introduction Ⅰ [Zhihu] [Github]
- Meta-Learning: An Introduction Ⅱ [Zhihu] [Github]
- Meta-Learning: An Introduction Ⅲ [Zhihu] [Github]
- Imitation Learning Ⅰ: A Beginner's Guide to Imitation Learning [Zhihu]
- Imitation Learning Ⅱ: A Thorough Theoretical Analysis of DAgger [Zhihu]
- Imitation Learning Ⅲ: EnsembleDAgger, a Bayesian Variant of DAgger [Zhihu]
- RLfD Ⅰ: Deep Q-Learning from Demonstrations, Explained [Zhihu]
- RLfD Ⅱ: Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance [Zhihu]
- End-to-End Robotic Reinforcement Learning without Reward Engineering: [Medium] [Github] [Zhihu]
- Overcoming Exploration in RL with Demonstrations: [Medium] [Github] [Zhihu]
- The Predictron: End-To-End Learning and Planning: [Zhihu] [Github]
- IROS 2019 Paper Quick Reads (Part 1): [Zhihu] [Github]
- IROS 2019 Paper Quick Reads (Part 2): [Zhihu] [Github]
- IROS 2019 Paper Quick Reads (Part 3): [Zhihu] [Github]
- IROS 2019 Paper Quick Reads (Part 4): [Zhihu] [Github]
- [In-Depth Survey] How to Learn Robot Reinforcement Learning Control with Only a Few Trials [Zhihu]
- A Guide to Modeling Custom Robots in MuJoCo [Zhihu]
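
For the Q-learning entry (Reinforcement Learning Basics Ⅴ), here is a minimal tabular sketch of the update rule Q(s,a) ← Q(s,a) + α[r + γ·max_a' Q(s',a') − Q(s,a)]. This is not the article's code: the Gymnasium API, the `FrozenLake-v1` environment, and the hyperparameters are all illustrative assumptions.

```python
import gymnasium as gym
import numpy as np

# Assumed setup: any small discrete-state environment works here.
env = gym.make("FrozenLake-v1")
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # assumed hyperparameters

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy exploration
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: bootstrap from the greedy next-state value,
        # zeroed out when the episode terminates.
        target = reward + gamma * np.max(Q[next_state]) * (not terminated)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state
```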
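
For the PR Ⅴ entry, a sketch of the four discrete quantities in plain NumPy. The function names are made up for illustration, and the inputs are assumed to be probability vectors with q > 0 wherever p > 0.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log p_i, with 0 log 0 := 0."""
    p = np.asarray(p, dtype=float)
    logs = np.log(p, where=p > 0, out=np.zeros_like(p))
    return -np.sum(p * logs)

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log q_i (assumes q_i > 0 wherever p_i > 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p[p > 0] * np.log(q[p > 0]))

def kl_divergence(p, q):
    """KL(p || q) = H(p, q) - H(p)."""
    return cross_entropy(p, q) - entropy(p)

def js_divergence(p, q):
    """JS(p, q): symmetrized KL against the mixture m = (p + q) / 2."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Sanity check: both divergences vanish for identical distributions.
p = np.array([0.5, 0.3, 0.2])
assert np.isclose(kl_divergence(p, p), 0.0)
assert np.isclose(js_divergence(p, p), 0.0)
```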
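
And for the PR Ⅵ entry, a sketch of the closed-form KL divergence between two k-dimensional Gaussians, KL(N(mu0, S0) || N(mu1, S1)) = 0.5·[tr(S1⁻¹S0) + (mu1 − mu0)ᵀ S1⁻¹ (mu1 − mu0) − k + ln(det S1 / det S0)]. The function name and the sanity-check values are illustrative.

```python
import numpy as np

def gaussian_kl(mu0, sigma0, mu1, sigma1):
    """KL( N(mu0, sigma0) || N(mu1, sigma1) ) for multivariate Gaussians."""
    k = mu0.shape[0]
    sigma1_inv = np.linalg.inv(sigma1)
    diff = mu1 - mu0
    # slogdet is numerically safer than det() for the log-ratio term.
    _, logdet0 = np.linalg.slogdet(sigma0)
    _, logdet1 = np.linalg.slogdet(sigma1)
    return 0.5 * (np.trace(sigma1_inv @ sigma0)
                  + diff @ sigma1_inv @ diff
                  - k
                  + logdet1 - logdet0)

# Sanity check: KL between identical distributions is zero.
mu, sigma = np.zeros(3), np.eye(3)
assert np.isclose(gaussian_kl(mu, sigma, mu, sigma), 0.0)
```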