
Project 1: Navigation

Introduction

In this project, I trained an agent to navigate (and collect bananas!) in a large, square world.

(Trained Agent demo GIF)

A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas.

The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:

  • 0 - move forward.
  • 1 - move backward.
  • 2 - turn left.
  • 3 - turn right.

The task is episodic, and in order to solve the environment, the agent must achieve an average score of +13 over 100 consecutive episodes.
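As a rough sketch of that criterion (not the exact loop in Report.ipynb), the average can be tracked with a rolling window of the last 100 episode scores; run_episode below is a hypothetical helper that plays one full episode and returns its score:

```python
from collections import deque

import numpy as np

scores_window = deque(maxlen=100)  # keeps only the last 100 episode scores

for i_episode in range(1, 2001):   # hypothetical episode budget
    score = run_episode()          # hypothetical helper: plays one episode
    scores_window.append(score)
    if len(scores_window) == 100 and np.mean(scores_window) >= 13.0:
        print('Environment solved in {} episodes, average score {:.2f}'.format(
            i_episode, np.mean(scores_window)))
        break
```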

Getting Started

  1. Download the environment from one of the links below. You need only select the environment that matches your operating system:

    (For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

    (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.

  2. Place the file in the DRLND GitHub repository, in the appropriate folder, and unzip (or decompress) the file.

  3. Run the first code cell in Report.ipynb to install a few required packages, or use pip install .

Instructions

To run the project, just run all code cells in Report.ipynb. When training ends, we save the network weights and run the agent with the best policy.
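For illustration, here is a minimal sketch of the "run the agent with the best policy" step. It assumes the Unity ML-Agents interface used in the course notebooks, a checkpoint file named checkpoint.pth, and an Agent with an act method taking an epsilon argument; these are assumptions, and the actual code is in Report.ipynb:

```python
import torch

from dqn_agent import Agent

# Hypothetical constructor arguments and checkpoint file name; the actual
# values used in this project are in Report.ipynb.
agent = Agent(state_size=37, action_size=4, seed=0)
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))

# `env` and `brain_name` are assumed to be set up earlier, as in the
# course notebooks (Unity ML-Agents environment).
env_info = env.reset(train_mode=False)[brain_name]   # watch mode
state = env_info.vector_observations[0]
score = 0
while True:
    action = agent.act(state, eps=0.0)               # greedy: no exploration
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    score += env_info.rewards[0]
    if env_info.local_done[0]:                       # episode finished
        break
print('Score:', score)
```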

model.py

In this file we create a neural network using PyTorch. The agent uses this network to map states to action values. We used two hidden layers with ReLU as the activation function. The hyperparameters used are shown in Report.ipynb.
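For reference, a minimal sketch of such a network; the layer sizes here are assumptions, and the actual hyperparameters are in Report.ipynb:

```python
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a state vector to one action value per available action."""

    def __init__(self, state_size=37, action_size=4, fc1_units=64, fc2_units=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, fc1_units)   # first hidden layer
        self.fc2 = nn.Linear(fc1_units, fc2_units)    # second hidden layer
        self.fc3 = nn.Linear(fc2_units, action_size)  # one output per action

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
```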

dqn_agent.py

In dqn_agent.py we create the Agent and the ReplayBuffer.

The ReplayBuffer is a class which stores experience tuples; it has the functions add and sample. The function add adds a new experience to memory, and sample returns a randomly sampled batch of experiences from memory.
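A minimal sketch of such a buffer, assuming the usual (state, action, reward, next_state, done) tuples; the actual implementation is in dqn_agent.py:

```python
import random
from collections import deque, namedtuple

import numpy as np
import torch

Experience = namedtuple('Experience',
                        ['state', 'action', 'reward', 'next_state', 'done'])

class ReplayBuffer:
    """Fixed-size buffer that stores experience tuples."""

    def __init__(self, buffer_size, batch_size):
        self.memory = deque(maxlen=buffer_size)  # oldest experiences drop out
        self.batch_size = batch_size

    def add(self, state, action, reward, next_state, done):
        """Add a new experience to memory."""
        self.memory.append(Experience(state, action, reward, next_state, done))

    def sample(self):
        """Randomly sample a batch of experiences from memory as tensors."""
        batch = random.sample(self.memory, k=self.batch_size)
        states = torch.from_numpy(np.vstack([e.state for e in batch])).float()
        actions = torch.from_numpy(np.vstack([e.action for e in batch])).long()
        rewards = torch.from_numpy(np.vstack([e.reward for e in batch])).float()
        next_states = torch.from_numpy(
            np.vstack([e.next_state for e in batch])).float()
        dones = torch.from_numpy(
            np.vstack([e.done for e in batch]).astype(np.uint8)).float()
        return states, actions, rewards, next_states, dones
```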

The Agent is a class which simulates the agent: it has a function to choose an action for a given state, another to save experiences in memory, and another to learn from those experiences using a sampled batch.
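As a sketch of those two pieces, an epsilon-greedy action choice and a standard DQN update on a sampled batch might look like the following. The method and attribute names (act, learn, qnetwork_local, qnetwork_target, optimizer) are assumptions based on a typical DQN agent, shown outside their class for brevity:

```python
import random

import torch
import torch.nn.functional as F

def act(self, state, eps=0.0):
    """Choose an action for the given state with an epsilon-greedy policy."""
    state = torch.from_numpy(state).float().unsqueeze(0)
    with torch.no_grad():
        action_values = self.qnetwork_local(state)
    if random.random() > eps:
        return int(action_values.argmax(dim=1).item())  # greedy action
    return random.randrange(self.action_size)           # random exploration

def learn(self, experiences, gamma):
    """Update the Q-network from a batch of (s, a, r, s', done) tuples."""
    states, actions, rewards, next_states, dones = experiences

    # Max predicted Q values for the next states, from the target network.
    q_targets_next = self.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
    # TD targets; no bootstrapping past the end of an episode.
    q_targets = rewards + gamma * q_targets_next * (1 - dones)
    # Q values the local network currently assigns to the actions taken.
    q_expected = self.qnetwork_local(states).gather(1, actions)

    loss = F.mse_loss(q_expected, q_targets)
    self.optimizer.zero_grad()
    loss.backward()
    self.optimizer.step()
```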