tooling-comparison

Docs and tests comparing RDF and OWL tooling.

Results

Task              horned-owl  py-horned-owl  rapper  rdftab-thick  rdftab-thin  robot
convert-ontology  PASS        PASS           PASS    FAIL          PASS         PASS
extract-labels    FAIL        PASS           PASS    FAIL          PASS         PASS
read-ontology     PASS        PASS           FAIL    PASS          PASS

Tools

These are the relevant tools that we will consider testing:

  • horned-owl
  • py-horned-owl
  • rapper
  • rdftab-thick
  • rdftab-thin
  • robot

Tasks

These are some relevant tasks we will consider including:

  • read ontology: e.g. robot annotate
  • convert ontology: e.g. robot convert (see the example script after this list)
  • extract labels: e.g. robot export
  • extract term: e.g. robot extract
  • get subclasses of a term
    • direct children
    • descendants
    • direct parents
    • ancestors
  • get interesting axioms from a term
  • run an interesting query
  • update label: e.g. robot rename
  • update axiom
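
As an illustration of the "convert ontology" task, here is a minimal sketch of a test script using robot; the input and output file names are placeholders, not the actual files used in this repository:

# Hypothetical convert-ontology script for robot; test.owl is a placeholder input.
robot convert --input test.owl --format ttl --output test.ttl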

Run

The various tools are packaged in a Docker container that builds on the odkfull image. Docker is the only requirement; it is usually installed with Docker Desktop.

First you need to build the Docker image:

docker build --tag ontodev-tooling-comparison .

Then you run the tests with:

docker run --rm -v $(pwd):/work -w /work ontodev-tooling-comparison

See the result/ directory for detailed results.
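
Individual results can then be inspected from the host. For example (the convert-ontology/robot path below is one possible task/tool combination under the layout described in the Design section; it only exists after that task has run):

cat result/summary.tsv
cat result/convert-ontology/robot/result.txt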

Design

Each task is described in a Markdown file. For each task we create a result/task/ directory. For each tool we create a result/task/tool/ subdirectory.

Each task Markdown file includes a "Tools" section with subsections for each specific tool. Each subsection (when complete) contains a code block that defines a script for running that task with that specific tool. This code block will be used to generate a result/task/tool/test.sh script.

For each task and each tool we first generate the subdirectory and test.sh script, then run /usr/bin/time -v sh test.sh > result.txt in that subdirectory. The resulting result.txt files can be used to compare performance.
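
A rough shell sketch of this loop is shown below; the tasks/ directory, the task Markdown file names, and the extract_block helper are assumptions for illustration, not part of the actual harness:

# Hypothetical per-task, per-tool run loop.
# extract_block (pulling a tool's code block out of a task Markdown file) is assumed.
for task in convert-ontology extract-labels read-ontology; do
  for tool in horned-owl py-horned-owl rapper rdftab-thick rdftab-thin robot; do
    dir="result/$task/$tool"
    mkdir -p "$dir"
    extract_block "tasks/$task.md" "$tool" > "$dir/test.sh"
    (cd "$dir" && /usr/bin/time -v sh test.sh > result.txt)
  done
done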

For each task we generate a result/task/summary.tsv table. We also generate a PASS/FAIL summary in result/summary.tsv.

Before running tasks we write some system information to result/system.txt.
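
The exact contents of result/system.txt are not specified here; commands like the following could produce it (a sketch, not the actual script):

# Hypothetical system information collection; the real script may record different details.
{
  uname -a
  nproc
  free -h
} > result/system.txt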