Project 1: Navigation

(Animation: trained agent)

Introduction

In this project, we train an agent to navigate (and collect bananas!) in a large, square world.

A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. The goal of the agent is therefore to collect as many yellow bananas as possible while avoiding blue bananas.

The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent must learn how best to select actions. Four discrete actions are available, corresponding to:

  • 0 - move forward.
  • 1 - move backward.
  • 2 - turn left.
  • 3 - turn right.

The task is episodic. The environment is considered solved when the agent achieves an average score of +13 over 100 consecutive episodes.
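For reference, the snippet below is a minimal sketch of how this environment can be inspected and rolled out with uniformly random actions. It assumes the unityagents package (Unity ML-Agents v0.4) commonly used with this Banana environment, and that the environment has already been downloaded as described under Getting Started below; the file path shown is the Linux build and should be adjusted for your platform.

# Minimal sketch (assumption: the unityagents package from Unity ML-Agents v0.4):
# inspect the Banana environment and run one episode with random actions.
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")  # adjust for your OS
brain_name = env.brain_names[0]                   # default brain controlling the agent
brain = env.brains[brain_name]

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]           # 37-dimensional state vector
action_size = brain.vector_action_space_size      # 4 discrete actions
print("State size:", len(state), "| Action size:", action_size)

score = 0.0
while True:
    action = np.random.randint(action_size)       # pick a random action (0-3)
    env_info = env.step(action)[brain_name]       # send the action to the environment
    reward = env_info.rewards[0]                  # +1 yellow banana, -1 blue banana
    state = env_info.vector_observations[0]       # next state
    done = env_info.local_done[0]                 # True when the episode ends
    score += reward
    if done:
        break
print("Episode score:", score)
env.close()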

Getting Started

  1. Follow the instructions in this link to install the required dependencies.

  2. Download the environment from one of the links below. You need only select the environment that matches your operating system:

    (For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

    (For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.

  3. Place the file in the project folder and unzip (or decompress) it. Make sure the following line in banana_world.py points to the correct location of the environment file.

env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")
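The exact path depends on your operating system and on where the archive was unzipped. The values below are typical for this environment but are assumptions to verify against your own download.

# Typical file_name values (assumptions; verify against your unzipped folder):
#   Linux:            "./Banana_Linux/Banana.x86_64"
#   macOS:            "./Banana.app"
#   Windows (32-bit): "./Banana_Windows_x86/Banana.exe"
#   Windows (64-bit): "./Banana_Windows_x86_64/Banana.exe"
from unityagents import UnityEnvironment
env = UnityEnvironment(file_name="./Banana.app")  # example for macOS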

Instructions

The project has the following main files:

  • banana_world.py: main script for training and testing the agent in the environment
  • dqn_agent.py: code for the agent used in the environment
  • model.py: code containing the Q-Network used as the function approximator by the agent (a sketch follows this list)
  • dqn.pth: saved model weights for the original DQN model
  • ddqn.pth: saved model weights for the Double DQN model
  • ddqn.pth: saved model weights for the Dueling Double DQN model
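For orientation, the Q-Network in model.py maps the 37-dimensional state vector to one Q-value per action. Below is a minimal sketch of such a network; the hidden-layer sizes and exact architecture are illustrative assumptions, not necessarily what model.py contains.

import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps the 37-dim state to one Q-value per action (layer sizes are illustrative)."""
    def __init__(self, state_size=37, action_size=4, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)    # Q-value estimates for each of the 4 actions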

You can run the banana_world.py script in training or testing mode. To train the agent, set the is_training variable to True in banana_world.py. By default, the code is configured to run in testing mode using the saved weights.
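As a rough illustration of how the is_training switch might be wired inside banana_world.py, the sketch below trains an epsilon-greedy DQN agent until the +13 average is reached and otherwise loads saved weights for testing. The Agent interface (act, step, qnetwork_local) and the hyperparameters are assumptions, not the repository's confirmed API.

# Hedged sketch of the training/testing switch in banana_world.py.
# Assumes dqn_agent.py exposes Agent(state_size, action_size, seed) with
# act(state, eps), step(state, action, reward, next_state, done) and a
# qnetwork_local attribute; the real interface in this repository may differ.
import numpy as np
import torch
from collections import deque
from unityagents import UnityEnvironment
from dqn_agent import Agent

is_training = True    # set to False to run in testing mode with saved weights

env = UnityEnvironment(file_name="./Banana_Linux/Banana.x86_64")
brain_name = env.brain_names[0]
agent = Agent(state_size=37, action_size=4, seed=0)

if is_training:
    eps, eps_end, eps_decay = 1.0, 0.01, 0.995     # epsilon-greedy schedule (assumed values)
    scores_window = deque(maxlen=100)              # scores of the last 100 episodes
    for i_episode in range(1, 2001):
        env_info = env.reset(train_mode=True)[brain_name]
        state, score = env_info.vector_observations[0], 0.0
        while True:
            action = agent.act(state, eps)                       # epsilon-greedy action
            env_info = env.step(action)[brain_name]
            next_state = env_info.vector_observations[0]
            reward, done = env_info.rewards[0], env_info.local_done[0]
            agent.step(state, action, reward, next_state, done)  # learn from experience
            state, score = next_state, score + reward
            if done:
                break
        scores_window.append(score)
        eps = max(eps_end, eps_decay * eps)
        if np.mean(scores_window) >= 13.0:                       # solved criterion
            torch.save(agent.qnetwork_local.state_dict(), "dqn.pth")
            break
else:
    agent.qnetwork_local.load_state_dict(torch.load("dqn.pth"))
    # ...roll out episodes with greedy actions (eps=0.0), as in the loop above

env.close()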

Enhancements

Several enhancements to the original DQN algorithm have also been incorporated: