For this project, you will train an agent to navigate (and collect bananas!) in a large, square world.
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas.
The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available (see the sketch after this list), corresponding to:

- `0` - move forward.
- `1` - move backward.
- `2` - turn left.
- `3` - turn right.
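As a minimal sketch of how a DQN-style agent chooses among these four actions, here is an epsilon-greedy selection rule. The helper below is illustrative rather than code from the notebooks; `q_values` is assumed to be the network's four action-value estimates for the current 37-dimensional state:

```python
import random
import numpy as np

def select_action(q_values, eps=0.1):
    """Epsilon-greedy choice over the four discrete actions.

    q_values : array of 4 action-value estimates for the current state
    eps      : exploration rate in [0, 1]
    """
    if random.random() > eps:
        return int(np.argmax(q_values))  # exploit: highest estimated value
    return random.randrange(4)           # explore: 0=forward, 1=backward, 2=left, 3=right
```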
To set up your Python environment to run the code in this repository, follow the instructions below.
- Create (and activate) a new environment with Python 3.6.

  - Linux or Mac:

    ```bash
    conda create --name drlnd python=3.6
    source activate drlnd
    ```

  - Windows:

    ```bash
    conda create --name drlnd python=3.6
    activate drlnd
    ```
- Follow the instructions in the [OpenAI Gym repository](https://github.com/openai/gym) to perform a minimal install of OpenAI Gym.
- Clone the repository (if you haven't already!), and navigate to the `python/` folder. Then, install several dependencies.

  ```bash
  git clone https://github.com/amitkverma/udacity-reinforcement-learning-navigation.git
  cd udacity-reinforcement-learning-navigation
  ```
- Create an IPython kernel for the `drlnd` environment.

  ```bash
  python -m ipykernel install --user --name drlnd --display-name "drlnd"
  ```
- Change the kernel to match the `drlnd` environment by using the drop-down `Kernel` menu.
- Download the environment from one of the links below. You need only select the environment that matches your operating system:
- Linux: click here
- Mac OSX: click here
- Windows (32-bit): click here
- Windows (64-bit): click here
(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.
- Place the file in this folder, unzip (or decompress) it, and then set the correct path in the `file_name` argument when creating the environment in the notebook `Double_DQN_Navigation.ipynb`:

  ```python
  env = UnityEnvironment(file_name="Banana.app")
  ```
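For reference, here is a minimal sketch of interacting with the loaded environment, assuming the `unityagents` package from the Udacity DRLND setup. The random-action loop is illustrative, not code from the notebooks:

```python
import numpy as np
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Banana.app")   # adjust the path for your OS
brain_name = env.brain_names[0]                  # the default brain controls the agent

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]          # 37-dimensional state
score = 0
while True:
    action = np.random.randint(4)                # random choice among the 4 actions
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    score += env_info.rewards[0]                 # +1 yellow banana, -1 blue banana
    if env_info.local_done[0]:                   # episode finished
        break
print("Score:", score)
env.close()
```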
```
.
├── images                           # Supporting images
├── checkpoint                       # Contains the saved models
│   ├── duel_dqn.pth                 # Saved model weights for the Dueling Double DQN model
│   ├── double_dqn.pth               # Saved model weights for the Double DQN model
│   ├── prioritize_dqn.pth           # Saved model weights for the Prioritized DQN model
├── results                          # Contains images of results
│   ├── duel_dqn_result.png          # Result for the Dueling Double DQN model
│   ├── double_dqn_result.png        # Result for the Double DQN model
│   ├── prioritize_dqn_result.png    # Result for the Prioritized DQN model
├── Dueling_DQN_Navigation.ipynb     # Notebook with solution using the Dueling Double DQN model
├── Double_DQN_Navigation.ipynb      # Notebook with solution using the Double DQN model
├── Prioritized_DQN_Navigation.ipynb # Notebook with solution using the Prioritized DQN model
├── Navigation.ipynb                 # Explore the Unity environment
```
Follow the instructions in `Navigation.ipynb` to get started with training your own agent!

To watch a trained smart agent, open any of the notebooks and run its "Model in action" section after loading the environment. It will load the saved model and start playing the game.
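As a rough sketch of what that section does (assuming an `agent` object exposing a `qnetwork_local` network, as in the DQN agents used in these notebooks, plus the `env` and `brain_name` from the setup above), loading the saved weights and acting greedily looks like:

```python
import torch
import numpy as np

# Load the saved weights into the local Q-network (path per the checkpoint folder above)
agent.qnetwork_local.load_state_dict(torch.load('checkpoint/double_dqn.pth'))
agent.qnetwork_local.eval()

env_info = env.reset(train_mode=False)[brain_name]
state = env_info.vector_observations[0]
while True:
    with torch.no_grad():
        q_values = agent.qnetwork_local(torch.from_numpy(state).float().unsqueeze(0))
    action = int(np.argmax(q_values.numpy()))    # act greedily with respect to Q
    env_info = env.step(action)[brain_name]
    state = env_info.vector_observations[0]
    if env_info.local_done[0]:                   # episode finished
        break
```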
The plots below show the score per episode across all episodes. The environment is considered solved when the agent earns an average score of +13 over 100 consecutive episodes; here it was solved in 1000 episodes.
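A small helper (illustrative, not from the notebooks) shows how that solve point can be computed from a list of per-episode scores, assuming the standard +13-over-100-episodes criterion:

```python
import numpy as np

def solved_episode(scores, target=13.0, window=100):
    """Return the first episode whose trailing 100-episode mean reaches the target."""
    for i in range(window, len(scores) + 1):
        if np.mean(scores[i - window:i]) >= target:
            return i
    return None  # not solved within the given episodes
```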
| Double DQN | Prioritized DQN | Dueling DQN |
|---|---|---|
| ![Double DQN result](results/double_dqn_result.png) | ![Prioritized DQN result](results/prioritize_dqn_result.png) | ![Dueling DQN result](results/duel_dqn_result.png) |