Early stopping and dynamic learning rate reduction
Closed this issue · 1 comment
mdraw commented
- Support `ReduceLROnPlateau` in `StoppableTrainer`
- Implement an early stopping criterion in `StoppableTrainer`, in addition to the existing maximum-time and maximum-iteration limits. This could work similarly to `ReduceLROnPlateau` (except that instead of reducing the learning rate, training is terminated when no improvement is observed for a long time).
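A minimal sketch of what such a criterion could look like, mirroring the patience logic of PyTorch's `ReduceLROnPlateau` but signaling termination instead of lowering the learning rate. The class name `EarlyStopping` and its parameters are hypothetical, not part of the existing codebase:

```python
class EarlyStopping:
    """Signal that training should stop when a monitored metric
    (e.g. validation loss) has not improved for `patience` steps.

    Hypothetical sketch; not an existing StoppableTrainer API.
    """

    def __init__(self, patience=10, min_delta=0.0, mode='min'):
        self.patience = patience    # tolerated steps without improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.mode = mode            # 'min' for losses, 'max' for accuracies
        self.best = None
        self.num_bad_steps = 0

    def step(self, metric):
        """Record the latest metric value; return True if training should stop."""
        if self.best is None:
            self.best = metric
            return False
        if self.mode == 'min':
            improved = metric < self.best - self.min_delta
        else:
            improved = metric > self.best + self.min_delta
        if improved:
            self.best = metric
            self.num_bad_steps = 0
        else:
            self.num_bad_steps += 1
        return self.num_bad_steps > self.patience
```

A training loop could then call `stopper.step(val_loss)` after each validation pass and break out when it returns `True`, much like `scheduler.step(val_loss)` is called for `ReduceLROnPlateau`.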
mdraw commented
This issue is mostly obsolete now that we support quasi-periodic learning rate schedules, which never reach an easily identifiable point beyond which no further improvement is possible (especially if the parameter snapshots are used for stochastic weight averaging or ensembling, where diversity matters).
Contributions are still welcome in this regard if they work well with classical learning rate schedules, but I currently don't plan on implementing such a feature myself.