Minigrid_HCI-project

Train agents on MiniGrid from human demonstrations using Inverse Reinforcement Learning


Inverse Reinforcement Learning on Minigrid

The aim of this project is to provide a tool for training agents on MiniGrid. A human player records game demonstrations, and an agent is then trained from these demonstrations using Inverse Reinforcement Learning (IRL) techniques.

The IRL algorithms are based on the following paper: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations [1].

Usage Notes

Installation

  1. git clone this repo
  2. conda create -n venv_irl python=3.9
  3. conda activate venv_irl
  4. Install pytorch
  5. conda install -c anaconda pyqt
  6. cd to repo root
  7. pip install -e .

Collect data and train an agent

  1. Run python agents_window.py
  2. Press Add environment, then select from the drop-down
  3. Press Create new agent
  4. Add some demonstrations:
    1. Press New game
    2. Control the agent: WASD/arrow keys to move, 'p' to pickup, 'o' to drop, 'i' or space to interact (e.g. with a door), backspace/delete to reset
      1. Commands are defined in play_minigrid.py, qt_key_handler
    3. When you finish the episode, the Save button will activate -- press it to save the demonstration
    4. Collect at least 2 demonstrations
  5. Press the -> button next to each demonstration to add it to the list used for training (the order of this list defines the ranking of the demonstrations)
  6. Press Train to start training
  7. To see training progress and example runs, press "Info" next to the agent in the Agents list
  8. See training plots:
    1. In a terminal, cd to repo root
    2. run tensorboard --logdir data --port 6006
    3. In a browser, navigate to localhost:6006
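The key bindings from step 4.2 amount to a lookup from pressed key to MiniGrid action. The real mapping lives in play_minigrid.py (qt_key_handler); the sketch below is illustrative only, and the exact key spellings and action names are assumptions:

```python
# Illustrative sketch of the key-to-action mapping described above.
# The real implementation is in play_minigrid.py (qt_key_handler);
# key spellings and action names here are assumptions.
KEY_TO_ACTION = {
    "w": "forward", "up": "forward",
    "a": "left",    "left": "left",
    "d": "right",   "right": "right",
    "p": "pickup",
    "o": "drop",
    "i": "toggle",  "space": "toggle",     # interact, e.g. with a door
    "backspace": "reset", "delete": "reset",
}

def handle_key(key_name):
    """Return the action bound to a pressed key, or None if unbound."""
    return KEY_TO_ACTION.get(key_name.lower())
```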

MiniGrid environment

Gym-minigrid [2] is a minimalistic gridworld package for OpenAI Gym.

There are many different environments; some examples are shown below.

The red triangle represents the agent that can move within the environment, while the green square (usually) represents the goal. There may also be other objects that the agent can interact with (doors, keys, etc.) each with a different color.


Graphical Application

The graphical interface allows the user to create, order, and manage a set of games in order to create an agent that shows a desired behavior. The application windows are shown below.

Initial window

Choose an environment to use


Agents management

Browse list of created agents


New agent

Add demonstrations and create a new agent


Agent details

Inspect a trained agent


Neural Networks

Reward Neural Network

Architecture of the Reward Neural Network:

  • input: MiniGrid observation
  • output: reward

Trained with the T-REX loss [1].
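As a rough sketch of what the T-REX loss computes: per-step predicted rewards are summed into trajectory returns, and a pair of ranked trajectories is scored with a softmax ranking loss so the better-ranked trajectory is pushed toward a higher predicted return [1]. Function and argument names below are illustrative, not the project's actual code:

```python
import math

def trex_loss(rewards_worse, rewards_better):
    """Pairwise T-REX ranking loss.

    Each argument is the list of per-step rewards the reward network
    predicted for one trajectory; the second trajectory is ranked better.
    """
    r_i = sum(rewards_worse)   # predicted return of the worse trajectory
    r_j = sum(rewards_better)  # predicted return of the better trajectory
    # -log softmax(r_j) over the pair, computed in a numerically stable way
    m = max(r_i, r_j)
    log_denom = m + math.log(math.exp(r_i - m) + math.exp(r_j - m))
    return -(r_j - log_denom)
```

When both trajectories get the same predicted return, the loss is log 2 (a 50/50 guess); it shrinks as the better trajectory's predicted return grows relative to the worse one's.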


Policy Neural Network

Architecture of the Policy Neural Network:

  • input: MiniGrid observation
  • output: probability distribution over the actions

Trained with loss: -log(action_probability) * discounted_reward
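This is the standard policy-gradient (REINFORCE-style) objective. A minimal plain-Python sketch, assuming the usual discounted return G_t = r_t + gamma * G_{t+1}; function names are illustrative and the network that produces the action probabilities is omitted:

```python
import math

def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for every timestep."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

def policy_loss(action_probs, rewards, gamma=0.99):
    """Sum of -log(action_probability) * discounted_reward over an episode."""
    return sum(-math.log(p) * g
               for p, g in zip(action_probs, discounted_returns(rewards, gamma)))
```

Minimizing this loss increases the log-probability of actions that led to high discounted reward.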


Experiments & results

We recorded a set of demonstrations to try to obtain the desired behavior shown on the left of the image below.

The heatmaps show the rewards assigned by the trained reward network; each heatmap corresponds to one direction the agent can face, in order: up, right, down, left.


Run the project

  • go to the directory in which you have downloaded the project
  • go inside Minigrid_HCI-project folder with the command: cd Minigrid_HCI-project
  • run the application with the command python agents_window.py

References

[1] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum. "Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations" (T-REX), July 2019.

[2] Maxime Chevalier-Boisvert, Lucas Willems, Suman Pal. "Minimalistic Gridworld Environment for OpenAI Gym" (Gym-minigrid), GitHub repository, 2018.