multiobjectiveanalysis

Compares the results of several multi-objective MDP model checkers


Multi-objective Benchmark

This Python program evaluates multi-objective numerical queries both by querying the result directly and by approximating it through multi-objective achievability queries. This gives us insight into how multi-objective model checking tools perform and helps to find mistakes in their implementations.
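The approximation idea can be illustrated with a bisection: the optimum of a numerical query is the largest threshold for which the corresponding achievability query still holds. A minimal sketch, where `is_achievable` is a hypothetical stand-in for an actual model checker call (names and values below are illustrative, not this project's API):

```python
def approximate_numerical(is_achievable, lo, hi, eps=1e-6):
    """Bracket the numerical optimum by repeated achievability queries.

    Assumes the query is monotone: if a threshold is achievable,
    every smaller threshold is achievable as well.
    """
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if is_achievable(mid):
            lo = mid  # threshold achievable, optimum lies above
        else:
            hi = mid  # threshold not achievable, optimum lies below
    return lo

# Toy oracle standing in for a model checker: thresholds up to 0.7 hold.
result = approximate_numerical(lambda t: t <= 0.7, 0.0, 1.0)
```

Comparing such an approximation against the directly computed numerical result is what allows the benchmark to cross-check the tools against each other.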

Structure

  • models contains all models and multi-objective numerical properties that are analysed
  • results contains the reference results; it will also contain your own results if you run the tool
  • scripts contains the code to generate new results

Analysing our results

Here we list all directories that we published.

Settings

The settings used can be found in settings.yml.
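The actual settings.yml is generated by the script on its first run. Purely as an illustration of its role (every key, tool name, and path below is a hypothetical placeholder, not the generated schema), such a file might look like:

```yaml
# Hypothetical sketch; adapt the paths to your own installation.
storm: /opt/storm/build/bin/storm   # path to a model checker binary
prism: /opt/prism/bin/prism         # path to another model checker
java: /usr/bin/java                 # java executable
spot: /usr/local                    # installation prefix of spot
```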

Used versions

Used environment

The experiments were run on a Lenovo ThinkPad with an Intel i7-13700H (14 cores) and 32 GB of RAM, on Windows 11 build 22631.3447, inside WSL.

Replication

Prerequisites

Running the script

First of all, the project needs to be cloned: git clone https://github.com/Chickenpowerrr/multiobjectiveanalysis.git

After that, we go into the scripts directory: cd multiobjectiveanalysis/scripts

We then need to install the Python packages: pip3 install -r requirements.txt

After that, we can run the script for the first time: python3 main.py

Most likely, the program will now crash: the script has generated a settings.yml file, in which you need to set the paths to the model checkers, java and spot. After that, the program should work by simply running: python3 main.py
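If the program keeps crashing after editing settings.yml, a common cause is a path that does not exist on disk. A small stand-alone check along these lines can help narrow it down (the function and the example entries are hypothetical, not part of the project's scripts):

```python
from pathlib import Path

def missing_paths(configured):
    """Return the names of all entries whose path does not exist on disk."""
    return [name for name, path in configured.items()
            if not Path(path).exists()]

# Hypothetical entries; the real keys live in the generated settings.yml.
example = {
    "java": "/usr/bin/java",
    "storm": "/definitely/not/a/real/path",
}
```

Calling `missing_paths(example)` lists every configured tool whose path needs fixing before python3 main.py can succeed.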