Controlling a Spaceship using Hindsight Experience Replay (a.k.a. HER)
This research is based on the paper Hindsight Experience Replay, submitted on July 5th, 2017 by OpenAI researchers.
I wrote a series of Medium articles that try to demystify the algorithm and describe my journey during the research.
I'm using a Deep Q-Network (DQN) combined with Double DQN and a Dueling Network Architecture, as sketched below.
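As a rough illustration (a minimal sketch, not this repo's exact code), the dueling architecture splits the network into a state-value stream and an advantage stream, and Double DQN decouples action selection from action evaluation when forming the bootstrap target. All names, the hidden size, and the discount factor here are illustrative assumptions:

```python
# Sketch of Dueling DQN + a Double DQN target (PyTorch); names are illustrative.
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.feature(obs)
        v, a = self.value(h), self.advantage(h)
        # Subtract the mean advantage so the V/A decomposition is identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

@torch.no_grad()
def double_dqn_target(online, target, rewards, next_obs, dones, gamma=0.98):
    # Double DQN: the online net picks the greedy action,
    # the target net evaluates it, which reduces overestimation bias.
    best = online(next_obs).argmax(dim=1, keepdim=True)
    next_q = target(next_obs).gather(1, best).squeeze(1)
    return rewards + gamma * (1.0 - dones) * next_q
```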
From the paper's abstract:

> Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary, and therefore avoids the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum.
>
> We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.
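To make the core idea concrete, here is a minimal sketch (not this repo's exact code) of HER's simplest relabeling strategy, "final": every transition of a failed episode is stored a second time with the goal replaced by the goal the agent actually achieved at the end of the episode. The transition layout, `compute_reward`, and the tolerance `tol` are assumptions for illustration:

```python
# Sketch of HER "final" goal relabeling; field names and tolerance are assumed.
import numpy as np

def compute_reward(achieved_goal, goal, tol=0.05):
    # Sparse, binary reward: 0 iff the achieved goal is within `tol` of the goal.
    return 0.0 if np.linalg.norm(achieved_goal - goal) < tol else -1.0

def her_relabel(episode):
    """Replay the episode as if the goal we actually reached at the end
    had been the intended goal all along."""
    hindsight_goal = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        new = dict(t)
        new["goal"] = hindsight_goal
        new["reward"] = compute_reward(t["achieved_goal"], hindsight_goal)
        relabeled.append(new)
    return relabeled
```

Relabeled transitions are stored in the replay buffer alongside the originals, so even episodes that never hit the true goal produce informative, non-zero learning signal for the off-policy learner.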
Related papers and articles:

- Hindsight Experience Replay
- DHER: Hindsight Experience Replay for Dynamic Goals
- Hindsight policy gradients
- Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
- Advances in Experience Replay
- Curriculum-guided Hindsight Experience Replay
- Soft Hindsight Experience Replay
- Reinforcement Learning with Hindsight Experience Replay
- Learning from mistakes with Hindsight Experience Replay
- Advanced Exploration: Hindsight Experience Replay
- Understanding DQN+HER