ContinuousControl

Teaching a robot arm to reach a ball with deep reinforcement learning


Project 2: Continuous Control

Project Details

For this project, you will work with the Reacher environment.

(Animation: trained agent)

In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.

The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocity of the arm.
Each action is a vector of four numbers, corresponding to the torques applicable to the two joints. Every entry in the action vector must be a number between -1 and 1.
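
As a quick illustration (a minimal NumPy sketch, not taken from the project notebook), a valid random action can be sampled and clipped to the required range:

import numpy as np

# An action is a vector of 4 torques (2 per joint), each entry in [-1, 1].
action = np.clip(np.random.randn(4), -1.0, 1.0)
print(action.shape)                               # (4,)
print(action.min() >= -1.0, action.max() <= 1.0)  # True True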

Getting Started

  1. Clone the repo: git clone https://github.com/dhaw92/ContinuousControl
  2. Create a conda environment and install the required packages:
conda create --name cc python=3.6
source activate cc
pip install torch
pip install unityagents==0.4.0 
pip install mlagents 
  3. Download the environment from one of the links below. You need only select the environment that matches your operating system:

    (For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

    (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link (version 1) or this link (version 2) to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)

  4. Place the file under the repository folder ContinuousControl/, and unzip (or decompress) the file.
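
Once the file is unzipped, the environment can be loaded with the unityagents package installed above. The sketch below assumes the macOS build name Reacher.app; substitute the path of the build you downloaded for your operating system:

from unityagents import UnityEnvironment

# File name is an assumption (macOS build); use the path of your own download,
# e.g. Reacher_Linux/Reacher.x86_64 on Linux.
env = UnityEnvironment(file_name='Reacher.app')

# Unity environments expose one or more 'brains'; the Reacher task uses the first.
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# Reset in training mode and inspect the observation and action spaces.
env_info = env.reset(train_mode=True)[brain_name]
print('Number of agents:', len(env_info.agents))
print('State size:', env_info.vector_observations.shape[1])  # 33
print('Action size:', brain.vector_action_space_size)        # 4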

Instructions

Follow the instructions in Continuous_Control.ipynb to get started with training your own agent!
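
Before training, it can help to sanity-check the environment with a random agent. The loop below is a sketch of the standard unityagents stepping pattern; it assumes the env and brain_name variables from the loading snippet in the previous section:

import numpy as np

env_info = env.reset(train_mode=False)[brain_name]  # watch mode
num_agents = len(env_info.agents)
scores = np.zeros(num_agents)

while True:
    # Sample random torques and clip them to the valid [-1, 1] range.
    actions = np.clip(np.random.randn(num_agents, 4), -1.0, 1.0)
    env_info = env.step(actions)[brain_name]
    scores += env_info.rewards                      # accumulate per-agent reward
    if np.any(env_info.local_done):                 # episode finished
        break

print('Average score this episode:', np.mean(scores))
env.close()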