Cyclical Learning Rates for Training Neural Networks With Unbalanced Data Sets

The learning rate is one of the most important hyper-parameters to tune when training convolutional neural networks. In this paper, cyclical learning rates (CLR), a technique for selecting a range of learning rates for a neural network, were applied to data sets with two different degrees of skewness. Instead of being held fixed, the learning rate is cycled between a lower bound and an upper bound during training. CLR policies are computationally simple and avoid the expense of fine-tuning a fixed learning rate. The results clearly show that varying the learning rate during the training phase gives far better results than fixed values, with a similar or even smaller number of epochs.
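For illustration, here is a minimal sketch of the triangular CLR policy from Smith's original paper (Smith, 2017), which is the standard form of this technique; the bound values `base_lr`, `max_lr`, and the half-cycle length `step_size` below are illustrative assumptions, not values taken from this repository.

```python
import numpy as np

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate (Smith, 2017).

    The rate rises linearly from base_lr to max_lr over step_size
    iterations, falls back to base_lr over the next step_size
    iterations, and the cycle repeats. All parameter values here
    are illustrative assumptions.
    """
    cycle = np.floor(1 + iteration / (2 * step_size))
    x = np.abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# Example: the rate peaks at max_lr mid-cycle and returns to base_lr.
for it in (0, 1000, 2000, 3000, 4000):
    print(it, triangular_clr(it))
```

In practice, the schedule is evaluated once per training batch and the optimizer's learning rate is updated accordingly; the lower and upper bounds are typically chosen with a short range test before training.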
