microsoft/InnerEye-DeepLearning

Inaccurate documentation for evaluating against pre-trained models

peterhessey opened this issue · 1 comment

Is there an existing issue for this?

  • I have searched the existing issues

Issue summary

Some of the documentation for evaluating against pre-trained models is missing or inaccurate.

What documentation should be provided?

  1. The evaluation code builds its config from the --model parameter, not from the config stored with model_id. This should be clarified.
  2. The dataset used for evaluation needs to contain at least 3 subjects, so that there is at least 1 each for training, validation, and testing. This is quite cumbersome, because users will typically want to evaluate on all of their data. Documentation should be clearer on how to do this (inference service / other workarounds?). See the dataset sketch after this list.
  3. Clarify that the evaluation dataset needs to contain the same structures that the model was trained on, or the validation checks will fail.
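
To make points 2 and 3 concrete, here is a minimal sketch of a dataset.csv that should satisfy both constraints, assuming the standard subject/filePath/channel layout used by InnerEye datasets. The structure names spinalcord and lung_r are hypothetical placeholders for whatever ground-truth structures the pre-trained model expects, and the three subjects exist purely to allow a 1/1/1 train/validation/test split:

```csv
subject,filePath,channel
1,subj1/ct.nii.gz,ct
1,subj1/spinalcord.nii.gz,spinalcord
1,subj1/lung_r.nii.gz,lung_r
2,subj2/ct.nii.gz,ct
2,subj2/spinalcord.nii.gz,spinalcord
2,subj2/lung_r.nii.gz,lung_r
3,subj3/ct.nii.gz,ct
3,subj3/spinalcord.nii.gz,spinalcord
3,subj3/lung_r.nii.gz,lung_r
```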

Example command that currently works:

python ./InnerEye/ML/runner.py --azure_dataset_id <dataset_id> --model <model_class_name> --model_id <azure_model_id>:<version> --azureml --train False --restrict_subjects=1,1,1 --check_exclusive=False
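
For context on the flags (behaviour as understood from the runner, worth verifying against its argument parsing): --train False skips training and runs only the inference and evaluation phase against the model referenced by --model_id, and --restrict_subjects=1,1,1 caps the train/validation/test splits at one subject each, which is the current workaround for the three-subject minimum described in point 2 above.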

AB#8800

Additionally, there is no documentation on how to run inference locally; see the discussion in #842.
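
In lieu of that documentation, a possible starting point (untested, and only a sketch) would be the same command with --azureml removed, assuming the runner's --local_dataset parameter accepts a folder containing dataset.csv plus the image files, and that AzureML workspace credentials are still available so the registered model can be downloaded:

```shell
# Untested sketch: run inference on the local machine instead of in AzureML.
# Assumes /path/to/dataset contains dataset.csv plus the referenced images,
# and that workspace credentials are available to fetch <azure_model_id>:<version>.
python ./InnerEye/ML/runner.py \
  --local_dataset /path/to/dataset \
  --model <model_class_name> \
  --model_id <azure_model_id>:<version> \
  --train False \
  --restrict_subjects=1,1,1 \
  --check_exclusive=False
```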