[paper] [dataset] [evaluation webserver] [BibTeX]
This is the evaluator code for the paper "LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and Benchmark", presented at ICCV 2023. It can be used to evaluate semantic and panoptic segmentation predictions against the LaRS ground-truth annotations.
Currently only the GT of the validation set is publicly available. For evaluation on the LaRS test set, please submit your predictions through our evaluation server.
- Install the requirements into your Python environment:
pip install -r requirements.txt
- For each of the evaluation tracks (semantic segmentation, panoptic segmentation) the evaluator expects a prediction root dir, where predictions will be placed.
Configure the paths to the dataset and the predictions root in the config files for your version of LaRS (e.g. lars_test_semantic.yaml); a sketch for checking these paths is shown below.
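The exact contents of the config files are defined by the evaluator, but as a quick sanity check you can load them and verify that the configured locations exist. The following is a minimal sketch assuming PyYAML and hypothetical key names (dataset_path, predictions_path); use the keys that actually appear in your lars_*.yaml file.

```python
# Minimal sketch: sanity-check the paths configured for the evaluator.
# The key names (dataset_path, predictions_path) are hypothetical -- use the
# keys that actually appear in your lars_*.yaml config file.
import sys
from pathlib import Path

import yaml  # PyYAML


def check_config(config_path: str) -> None:
    with open(config_path, "r") as f:
        cfg = yaml.safe_load(f)

    for key in ("dataset_path", "predictions_path"):  # hypothetical key names
        value = cfg.get(key)
        status = "OK" if value and Path(value).exists() else "MISSING"
        print(f"{key}: {value} [{status}]")


if __name__ == "__main__":
    # e.g. python check_config.py lars_test_semantic.yaml
    check_config(sys.argv[1])
```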
- Place the predictions of your methods into
<prediction_root_dir>/<method_name>
The method dir contains PNG files with predictions for all test images:
- Semantic segmentation: The PNG file contains predicted segmentation masks, following the color coding of classes specified in the configuration file (e.g. lars_test_semantic.yaml).
- Panoptic segmentation: The PNG file contains RGB-coded class and instance predictions. The format follows the LaRS GT masks: the class id is stored in the R component, while instance ids are stored in the G and B components (see the sketch after this list).
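For the panoptic track, a prediction mask could be written as in the sketch below. This is only an illustration: the file and directory names and, in particular, the byte order of the instance id across the G and B components are assumptions; check the LaRS GT masks and the evaluator code for the exact convention.

```python
# Minimal sketch: encode a panoptic prediction into a LaRS-style RGB PNG.
# Assumes class ids fit into 8 bits and that the instance id is split across
# the G (high byte) and B (low byte) components -- verify the byte order
# against the LaRS GT masks / evaluator code before submitting.
import os

import numpy as np
from PIL import Image


def save_panoptic_png(class_ids: np.ndarray, instance_ids: np.ndarray, out_path: str) -> None:
    """class_ids, instance_ids: (H, W) integer arrays for one image."""
    h, w = class_ids.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = class_ids.astype(np.uint8)              # R: class id
    rgb[..., 1] = (instance_ids >> 8).astype(np.uint8)    # G: instance id, high byte (assumed)
    rgb[..., 2] = (instance_ids & 0xFF).astype(np.uint8)  # B: instance id, low byte (assumed)
    Image.fromarray(rgb, mode="RGB").save(out_path)


# Example: one 4x4 prediction written into a hypothetical method directory.
os.makedirs("predictions/my_method", exist_ok=True)
classes = np.full((4, 4), 3, dtype=np.int32)      # hypothetical class id
instances = np.full((4, 4), 257, dtype=np.int32)  # hypothetical instance id
save_panoptic_png(classes, instances, "predictions/my_method/0001.png")
```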
- Run evaluation:
$ python evaluate.py path/to/config.yaml <method_name>
Result files with various statistics will be placed in the configured directory (results/v1.0.0/<track>/<method> by default).
Results for semantic segmentation methods include the following files (a sketch for inspecting them follows the list):
- summary.csv: Overall results (IoU, water-edge accuracy, detection F1)
- frames.csv: Per-frame metrics (number of TP, FP and FN, IoU, ...)
- segments.csv: Segment-wise results (TP coverage, FP area, FN area, ...)
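The result files are plain CSV tables, so they can be opened in any spreadsheet tool or loaded, for example, with pandas. A minimal sketch, assuming the default output layout and a hypothetical track/method name:

```python
# Minimal sketch: inspect semantic-segmentation results for one method.
# The directory below assumes the default results layout and a hypothetical
# track/method name -- adjust it to your configuration.
import pandas as pd

results_dir = "results/v1.0.0/semantic/my_method"

summary = pd.read_csv(f"{results_dir}/summary.csv")  # overall IoU, water-edge accuracy, detection F1
frames = pd.read_csv(f"{results_dir}/frames.csv")    # one row of metrics per frame

print(summary)
print(frames.head())
```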
Results for panoptic segmentation methods include the following files:
- summary.csv: Overall results (PQ, RQ, SQ, semantic metrics)
- frames.csv: Per-frame metrics
- segments.csv: Segment-wise results (TPs, FPs, FNs, areas, bboxes)
- segments_agnostic.csv: Segment-wise results for the obstacle-class-agnostic case
- segments_sem.csv: Segment-wise results from the semantic segmentation evaluation
- obst_cls.csv: Matched segments (GT and pred) categories and IoU -> for the confusion matrix (see the sketch after this list)
- obst_cls_agnostic.csv: Matched segment categories and IoU for the obstacle-class-agnostic case
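Since obst_cls.csv lists the matched GT and predicted categories, a confusion matrix can be tabulated from it directly. A minimal sketch, with hypothetical path and column names (gt_cls, pred_cls) that should be replaced by the ones actually used by the evaluator:

```python
# Minimal sketch: GT-vs-predicted category confusion matrix from obst_cls.csv.
# The path, track/method name and the column names gt_cls / pred_cls are
# hypothetical -- substitute the ones actually present in the file.
import pandas as pd

matches = pd.read_csv("results/v1.0.0/panoptic/my_method/obst_cls.csv")
confusion = pd.crosstab(matches["gt_cls"], matches["pred_cls"])
print(confusion)
```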
If you use LaRS, please cite our paper.
@InProceedings{Zust2023LaRS,
title={LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and Benchmark},
author={{\v{Z}}ust, Lojze and Per{\v{s}}, Janez and Kristan, Matej},
booktitle={International Conference on Computer Vision (ICCV)},
year={2023}
}