'monitor' is ignored to compute the score
AlessandroLovo opened this issue · 1 comment
AlessandroLovo commented
In the function ln.train_model the score of a fold is returned as

score = np.min(history[return_metric])

This means that if monitor != return_metric, we early stop according to monitor but still take the minimum with respect to return_metric. So early stopping doesn't make much sense.
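For illustration, a minimal sketch with an invented Keras-style history dict (the metric names val_loss and val_CustomLoss are placeholders, not taken from the actual code), showing how the current scoring can report a value from an epoch that early stopping never selects:

```python
import numpy as np

# Hypothetical per-epoch validation history, in the form Keras' History.history returns.
history = {
    'val_loss':       [1.00, 0.80, 0.90, 1.10],  # plays the role of monitor: best at epoch 1
    'val_CustomLoss': [0.50, 0.45, 0.30, 0.60],  # plays the role of return_metric: minimum at epoch 2
}
monitor, return_metric = 'val_loss', 'val_CustomLoss'

# Current behaviour: the fold score is the global minimum of return_metric,
# regardless of the epoch at which early stopping (driven by monitor) would stop.
score_current = np.min(history[return_metric])   # 0.30, reached at epoch 2
best_epoch = int(np.argmin(history[monitor]))    # 1, the epoch early stopping cares about
print(score_current, best_epoch)                 # 0.3 1
```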
A better solution would be

score = history[return_metric][np.argmin(history[monitor])]

This may complicate matters with optimal_chekpoint, as there, to have consistency, one should have metric == monitor instead of metric == return_metric.
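A sketch of the proposed scoring in the same toy setup (again with placeholder metric names): return_metric is read off at the epoch where monitor is best, so the score corresponds to the model state that early stopping would actually keep.

```python
import numpy as np

history = {
    'val_loss':       [1.00, 0.80, 0.90, 1.10],  # monitor
    'val_CustomLoss': [0.50, 0.45, 0.30, 0.60],  # return_metric
}
monitor, return_metric = 'val_loss', 'val_CustomLoss'

# Proposed behaviour: take return_metric at the epoch selected by monitor.
best_epoch = np.argmin(history[monitor])             # epoch 1
score_proposed = history[return_metric][best_epoch]  # 0.45
print(best_epoch, score_proposed)                    # 1 0.45
```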
AlessandroLovo commented
The confusion-free scenario is when metric == return_metric == monitor. Outside this condition things could get messy.