
Continuous Control

Introduction

This project implements Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning policy-gradient algorithm that operates over continuous action spaces.
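As a rough illustration of how DDPG handles continuous actions, the sketch below defines a deterministic actor whose tanh output keeps every action component in [-1, 1], and a critic that scores state-action pairs. The layer sizes and class names are assumptions for illustration only; they are not necessarily the networks used in the notebook.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: maps a state to a continuous action in [-1, 1]."""
    def __init__(self, state_size=33, action_size=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_size), nn.Tanh(),  # bound each action entry to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Q-function: scores a (state, action) pair with a single scalar."""
    def __init__(self, state_size=33, action_size=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size + action_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))
```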

The environment

The agent is trained on the Unity Reacher environment.

(Animation: trained agent in the Reacher environment)

In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of the agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocity of the arm. Each action is a vector of four numbers, corresponding to the torque applied to the two joints. Every entry in the action vector must be a number between -1 and 1.
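Because every action entry must lie in [-1, 1], the actor's output (plus any exploration noise) is typically clipped before being sent to the environment. A minimal sketch, with random values standing in for the actor's output:

```python
import numpy as np

num_agents, action_size = 20, 4
actions = np.random.randn(num_agents, action_size)  # e.g. raw actor output plus exploration noise
actions = np.clip(actions, -1.0, 1.0)                # every entry must lie in [-1, 1]
```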

The environment contains 20 identical agents, each with its own copy of the environment. The experiences of all agents are used to train a single model.
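One common way to feed all 20 agents' experience into a single model is to push every agent's transition into one shared replay buffer at each environment step. A minimal sketch, assuming a simple deque-based buffer (class and method names are illustrative, not necessarily those used in the notebook):

```python
import random
from collections import deque, namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class ReplayBuffer:
    """Shared buffer: transitions from all 20 agents train one model."""
    def __init__(self, capacity=int(1e6)):
        self.memory = deque(maxlen=capacity)

    def add_batch(self, states, actions, rewards, next_states, dones):
        # One transition per agent per environment step.
        for s, a, r, ns, d in zip(states, actions, rewards, next_states, dones):
            self.memory.append(Experience(s, a, r, ns, d))

    def sample(self, batch_size=128):
        return random.sample(self.memory, k=batch_size)
```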

Solving the Environment

The environment is considered solved when the average, over 100 consecutive episodes, of the mean score across the 20 agents is at least +30.
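In code, this criterion is typically checked with a 100-episode moving window of episode scores, where each episode's score is the mean return over the 20 agents. A minimal sketch (the placeholder values are for illustration only):

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)              # scores of the last 100 episodes

# At the end of every episode, with per-agent returns in a length-20 array:
agent_returns = np.random.uniform(25, 35, 20)  # placeholder values for illustration
scores_window.append(np.mean(agent_returns))   # episode score = average over the 20 agents

solved = len(scores_window) == 100 and np.mean(scores_window) >= 30.0
```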

Getting Started

  1. Download the environment:

wget https://s3-us-west-1.amazonaws.com/udacity-drlnd/P2/Reacher/Reacher_Linux_NoVis.zip

Note that this is the headless (NoVis) build of the environment, so you will not be able to watch the agent.

  2. Install the requirements:

pip install -r requirements.txt

  3. Start the notebook:

jupyter notebook

Open the notebook Reacher-training.ipynb. A sketch of how the environment is loaded and stepped from Python is shown below.
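The notebook interacts with the environment through Udacity's unityagents package (assumed to be installed via requirements.txt). A minimal sketch of loading the extracted environment and taking one random step with all 20 agents; the file path is an assumption and should point at wherever you unzipped the archive:

```python
from unityagents import UnityEnvironment
import numpy as np

# Path is an assumption: point it at the extracted Reacher executable.
env = UnityEnvironment(file_name="Reacher_Linux_NoVis/Reacher.x86_64")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations            # shape (20, 33)
num_agents, state_size = states.shape

# One random step for all 20 agents.
actions = np.clip(np.random.randn(num_agents, 4), -1.0, 1.0)
env_info = env.step(actions)[brain_name]
rewards = env_info.rewards                       # list of 20 rewards
dones = env_info.local_done                      # list of 20 done flags

env.close()
```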

Licenses and acknowledgements

Author