Inaccurate documentation for evaluating against pre-trained models
peterhessey opened this issue · 1 comment
peterhessey commented
Is there an existing issue for this?
- I have searched the existing issues
Issue summary
Some of the documentation for evaluating against pre-trained models is missing or inaccurate.
What documentation should be provided?
- The evaluation code uses a config based on the `--model` parameter, not the config provided by `model_id`. This should be clarified.
- The dataset used for evaluation needs to have at least 3 subjects, so that there is at least 1 subject each for training, validation, and testing (see the dataset sketch below). This is quite cumbersome, because users probably want to evaluate on all of their data. The documentation should be clearer on how to do this (inference service / other workarounds?).
- Clarify that the evaluation dataset needs to have the same structures as the model, or the checks will fail.
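For reference, a minimal sketch of what such a 3-subject dataset could look like, assuming the standard InnerEye segmentation `dataset.csv` layout with `subject`, `filePath` and `channel` columns; the channel names `ct`, `spinalcord` and `lung_r` here are placeholders and would need to match the structures of the pre-trained model:

```csv
subject,filePath,channel
1,subject1/ct.nii.gz,ct
1,subject1/spinalcord.nii.gz,spinalcord
1,subject1/lung_r.nii.gz,lung_r
2,subject2/ct.nii.gz,ct
2,subject2/spinalcord.nii.gz,spinalcord
2,subject2/lung_r.nii.gz,lung_r
3,subject3/ct.nii.gz,ct
3,subject3/spinalcord.nii.gz,spinalcord
3,subject3/lung_r.nii.gz,lung_r
```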
Example command that currently works:

```shell
python ./InnerEye/ML/runner.py --azure_dataset_id <dataset_id> --model <model_class_name> --model_id <azure_model_id>:<version> --azureml --train False --restrict_subjects=1,1,1 --check_exclusive=False
```
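In this command, `--restrict_subjects=1,1,1` limits the run to one subject each for the training, validation and test splits (which is presumably why the dataset needs at least 3 subjects), `--train False` skips training so that only evaluation is performed, and `--model_id` identifies the registered AzureML model and version to evaluate against.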
peterhessey commented
Additionally, there is a lack of documentation on how to run inference locally, as per this discussion: #842
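For what it's worth, a rough sketch of what a local evaluation run might look like, assuming that dropping `--azureml` makes the runner execute locally and that `--local_dataset` can point at a local copy of the dataset; both of these are assumptions and exactly this kind of thing is what the missing documentation should confirm:

```shell
# Assumed, unverified: run evaluation locally by dropping --azureml and
# pointing --local_dataset at a local folder containing dataset.csv.
python ./InnerEye/ML/runner.py --local_dataset /path/to/dataset --model <model_class_name> --model_id <azure_model_id>:<version> --train False --restrict_subjects=1,1,1 --check_exclusive=False
```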