Offline validation/evaluation of trained models
Closed this issue · 1 comments
mdraw commented
Validation, preview predictions, etc. are currently tied to the Trainer
class and are only run periodically during training. This code should be made reusable for offline evaluation of models (outside of the training loop). It needs to be easy to compare different model snapshots on a user-defined validation data set (calculating metrics and optionally visualizing inference results).
This should also be shown in an example script.
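A minimal sketch of what such an offline comparison could look like, assuming plain PyTorch (all names here — `evaluate`, the snapshot labels, the toy linear models — are hypothetical illustrations, not the elektronn3 API):

```python
# Hedged sketch: compare several model snapshots on one validation set.
# The models and data below are toy stand-ins, not elektronn3 components.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


@torch.no_grad()
def evaluate(model, loader, loss_fn):
    """One pass over the validation loader; returns (mean loss, accuracy)."""
    model.eval()
    total_loss, correct, n = 0.0, 0, 0
    for inputs, targets in loader:
        out = model(inputs)
        total_loss += loss_fn(out, targets).item() * inputs.shape[0]
        correct += (out.argmax(dim=1) == targets).sum().item()
        n += inputs.shape[0]
    return total_loss / n, correct / n


# Toy user-defined validation set.
torch.manual_seed(0)
inputs = torch.randn(64, 8)
targets = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(inputs, targets), batch_size=16)

# In practice these would be loaded from checkpoint files; here they are
# freshly initialized stand-ins for two training snapshots.
snapshots = {'epoch10': nn.Linear(8, 2), 'epoch20': nn.Linear(8, 2)}
results = {name: evaluate(m, loader, nn.CrossEntropyLoss())
           for name, m in snapshots.items()}
for name, (loss, acc) in results.items():
    print(f'{name}: loss={loss:.3f}, acc={acc:.3f}')
```

A library version would additionally accept arbitrary metric callables and an optional flag to render inference previews for each snapshot.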
mdraw commented
Offline validation already exists in https://github.com/ELEKTRONN/elektronn3/blob/ax_hyperopt2/examples/hyperopt/train_with_ax.py#L39-L76 and needs to be ported out of that script into the elektronn3 lib.