
Environments from the papers "Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning" and "Induction and Exploitation of Subgoal Automata for Reinforcement Learning" using the OpenAI Gym API.


Gym Subgoal Automata

Environments from [Toro Icarte et al., 2018] using the OpenAI Gym API. This repository complements the code for [Furelos-Blanco et al., 2020] and [Furelos-Blanco et al., 2021], whose code is here.

  1. Installation
  2. Usage
  3. Acknowledgments
  4. References

Installation

To install the package, clone the repository and run the following commands:

cd gym-subgoal-automata
pip install -e .

We recommend using a virtual environment, since this package's requirements may conflict with those of your current installation. The setup.py file lists the requirements the code needs to run safely.
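A minimal sketch of the recommended setup using Python's standard venv module (the directory name env is just a convention, not something the package requires):

```shell
# Create an isolated environment and activate it before
# running "pip install -e ." inside the cloned repository.
python3 -m venv env
source env/bin/activate
```

With the environment activated, the pip install -e . command above installs the package and its dependencies without touching your system-wide Python installation.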

The learned subgoal automata are exported to .png files using Graphviz. You can follow the instructions on the official webpage to install it.

Usage

The repository has implementations of the OfficeWorld and WaterWorld environments and a set of associated tasks, each with success and failure conditions. You can find the list of all tasks in the file gym_subgoal_automata/__init__.py. The following is an example of how to instantiate the OfficeWorld environment where the task is "deliver coffee to the office".

import gym
env = gym.make("gym_subgoal_automata:OfficeWorldDeliverCoffee-v0", params={"generation": "random", "environment_seed": 0})

You can use the method env.play() to control the environment with your keyboard using the w, a, s and d keys. In this task you have to observe f (the coffee) and then g (the office) while avoiding n (the plants/decorations).
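The task just described (observe f, then g, while never observing n) is exactly the kind of structure a subgoal automaton captures. The following is a minimal, self-contained sketch of such an automaton in plain Python; it is an illustration of the idea, not the repository's implementation, and the state names u0/u1 are our own labels:

```python
# Sketch of the subgoal automaton for "deliver coffee to the office":
# observe f (coffee), then g (office), while never observing n (a plant).
ACCEPT, REJECT = "accept", "reject"

TRANSITIONS = {
    "u0": {"f": "u1", "n": REJECT},    # waiting to pick up the coffee
    "u1": {"g": ACCEPT, "n": REJECT},  # carrying coffee, heading to the office
}

def run_automaton(observations):
    """Consume a sequence of observed symbols and return the final state."""
    state = "u0"
    for symbol in observations:
        if state in (ACCEPT, REJECT):
            break  # terminal states absorb all further observations
        # Symbols with no outgoing edge (e.g. observing g before f) are ignored.
        state = TRANSITIONS[state].get(symbol, state)
    return state

# run_automaton(["f", "g"]) -> "accept"
# run_automaton(["f", "n", "g"]) -> "reject"
```

Tracking the automaton state alongside the environment state is what lets reinforcement learning agents decompose the task into its subgoals, as described in the referenced papers.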

Acknowledgments

We thank the authors of reward machines for open-sourcing their code. The code in this repository is heavily based on theirs.

References