macsrl

A semi-centralized, logic-based MARL reward-shaping method that scales in the number of agents, evaluated in multiple scenarios.


Logic-Based Reward Shaping for Multi-Agent Reinforcement Learning (MARL)

This repository contains the implementation of the project described in this document.

This repository also includes the implementation of the learning-based synthesis algorithm described in this article; that implementation was developed by Alper Kamil Bozkurt and is taken from this repository.

The video rendering and recording are based on this gridworld repository.

Dependencies

The examples in this repository also require the following optional libraries for visualization:

Installation

To install the current release along with the bundled CSRL codebase:

git clone https://github.com/IngyN/macsrl.git
cd macsrl
pip3 install .

Basic Usage of This Repository

The main class of this repository is MultiControlSynthesis. It takes a list of ControlSynthesis objects (one per agent), a GridMDP object, and an OmegaAutomaton object representing the shared automaton, with sharedoa=True for our method.
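
As a hypothetical sketch only, the snippet below wires these pieces together based on the description above. The import path for MultiControlSynthesis, the number of agents, the grid contents, the LTL formula, and the exact placement of the sharedoa flag are all assumptions; consult the source files for the actual signatures.

import numpy as np
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis
from macsrl import MultiControlSynthesis  # import path is an assumption

n_agents = 2  # illustrative
ltl = 'F a & F b'  # illustrative joint objective
oa = OmegaAutomaton(ltl, sharedoa=True)  # shared automaton; flag placement assumed

# Illustrative 5x5 grid of empty cells with two labeled goal cells
shape = (5, 5)
structure = np.full(shape, 'E')
label = np.empty(shape, dtype=object)
label.fill(())  # no atomic propositions by default
label[0, 4] = ('a',)
label[4, 0] = ('b',)
grid_mdp = GridMDP(shape=shape, structure=structure, label=label)

# One ControlSynthesis (product MDP) per agent, all sharing the same automaton
agents = [ControlSynthesis(grid_mdp, oa) for _ in range(n_agents)]

# Semi-centralized synthesis over all agents
masynth = MultiControlSynthesis(agents, grid_mdp, oa)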

Graphing is done by loading the saved episode returns into the Graphing notebook. For the video rendering, we use the Annotation and Plotter classes from annotation.py and plotter.py.
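
For illustration, a minimal sketch of that graphing step, assuming the per-episode returns were saved as a NumPy .npy file; the file name and array shape here are hypothetical:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file of per-episode returns, shape (n_episodes,)
returns = np.load('episode_returns.npy')

# Smooth with a moving average so the learning trend is easier to see
window = 100
smoothed = np.convolve(returns, np.ones(window) / window, mode='valid')

plt.plot(smoothed)
plt.xlabel('Episode')
plt.ylabel('Return (moving average over 100 episodes)')
plt.show()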

Basic Usage of CSRL

The package consists of three main classes: GridMDP, OmegaAutomaton, and ControlSynthesis. The GridMDP class constructs a grid-world MDP using the parameters shape, structure, and label. The OmegaAutomaton class takes an LTL formula ltl and translates it into an LDBA. The ControlSynthesis class can then be used to compose a product MDP from the given GridMDP and OmegaAutomaton objects, and its q_learning method can be used to learn a control policy for the given objective. For example:
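
A sketch of such an example, reconstructed from the class descriptions above. The module paths, the LTL formula, the grid contents, and the q_learning arguments T (episode length) and K (number of episodes) are assumptions, not values taken from this repository:

import numpy as np
from csrl.mdp import GridMDP
from csrl.oa import OmegaAutomaton
from csrl import ControlSynthesis

# LTL objective: eventually stay in a or b forever, while always avoiding c (illustrative)
ltl = '(F G a | F G b) & G !c'
oa = OmegaAutomaton(ltl)  # translate the formula into an LDBA

# Grid-world description: 'E' marks an empty cell (structure values assumed)
shape = (5, 4)
structure = np.full(shape, 'E')
label = np.empty(shape, dtype=object)
label.fill(())  # no atomic propositions by default
label[0, 0] = ('a',)
label[4, 3] = ('b',)
label[2, 2] = ('c',)
grid_mdp = GridMDP(shape=shape, structure=structure, label=label)

# Compose the product MDP and learn a control policy
csrl = ControlSynthesis(grid_mdp, oa)
Q = csrl.q_learning(T=128, K=100000)  # T: episode length, K: number of episodes (assumed meanings)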

Examples

The repository contains a couple of example IPython notebooks:

Animations of the case studies:

HTML representations of the automata: