This repository contains the Python and MATLAB evaluation code for the George B. Moody PhysioNet Challenge 2023.
The evaluate_model script evaluates the outputs of your models using the evaluation metric described on the webpage for the 2023 Challenge. This script reports multiple evaluation metrics, so check the scoring section of the webpage to see how we evaluate and rank your models.
You can run the Python evaluation code by installing the NumPy package and running the following command in your terminal:
python evaluate_model.py labels outputs scores.csv
where
- labels (input; required) is a folder with labels for the data, such as the training data on the PhysioNet webpage;
- outputs (input; required) is a folder containing files with your model's outputs for the data; and
- scores.csv (output; optional) is a collection of scores for your model.
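If you want to call the evaluation from another Python script rather than typing the command by hand, the sketch below simply wraps the same command with subprocess. The folder names labels and outputs and the file name scores.csv are placeholders, and the sketch assumes evaluate_model.py sits in the current working directory.

    import subprocess
    import sys
    from pathlib import Path

    # Placeholder paths; point these at your own label and output folders.
    labels_folder = Path("labels")
    outputs_folder = Path("outputs")
    scores_file = Path("scores.csv")

    # Fail early if either input folder is missing.
    for folder in (labels_folder, outputs_folder):
        if not folder.is_dir():
            sys.exit(f"Folder not found: {folder}")

    # Run the evaluation script exactly as you would from the terminal.
    subprocess.run(
        [sys.executable, "evaluate_model.py",
         str(labels_folder), str(outputs_folder), str(scores_file)],
        check=True,
    )
    print(f"Scores written to {scores_file}")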
You can run the MATLAB evaluation code by installing Python and the NumPy package and running the following command in MATLAB:
evaluate_model('labels', 'outputs', 'scores.csv')
where
- labels (input; required) is a folder containing files with the labels for the data, such as the training data on the PhysioNet webpage;
- outputs (input; required) is a folder containing files with outputs produced by your model for the data; and
- scores.csv (output; optional) is a collection of scores for your model.
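Either version can write its scores to scores.csv. The exact columns in that file depend on the metrics the evaluation script reports and are not described here, so the snippet below is only a minimal sketch that treats it as an ordinary comma-separated file and prints each row for inspection.

    import csv
    from pathlib import Path

    # Hypothetical path; use whatever file name you passed to evaluate_model.
    scores_file = Path("scores.csv")

    # Print every row; the column names and values come from the evaluation script.
    with scores_file.open(newline="") as f:
        for row in csv.reader(f):
            print(", ".join(row))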
Unable to run this code with the outputs of your code? Try running it on the outputs of one of the example codes for the training data. Unable to install or run Python? Try the official Python distribution, Anaconda, or your operating system's package manager.
Please see the Challenge website for more details. Please post questions and concerns on the Challenge discussion forum.