
On the Variance of the Adaptive Learning Rate and Beyond


RAdam

In this paper, we study why warmup is needed for Adam and identify that the adaptive learning rate has an undesirably large variance in the early stage of training. RAdam (Rectified Adam) rectifies this variance, which stabilizes training and reduces the need for a hand-tuned warmup schedule.
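
To make the mechanism concrete, the sketch below computes the variance-rectification term used by RAdam, following the formulas in the paper. The function name and printed example values are illustrative only; the actual optimizer implementation lives in this repository.

```python
import math

def rectification_term(t, beta2=0.999):
    """Illustrative computation of RAdam's variance-rectification term r_t
    at step t (1-indexed), following the formulas in the paper."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0  # maximum length of the approximated SMA
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    if rho_t <= 4.0:
        # The variance of the adaptive learning rate is intractable this early,
        # so RAdam falls back to an un-adapted (momentum-only) update.
        return None
    return math.sqrt(
        ((rho_t - 4.0) * (rho_t - 2.0) * rho_inf)
        / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t)
    )

# Early steps yield no adaptive update; the term then grows toward 1.0,
# playing the role a manual warmup schedule would otherwise play.
print([rectification_term(t) for t in (1, 10, 100, 10000)])
```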

We are in an early-release beta. Expect some adventures and rough edges.

A detailed README is still in progress.

Usage guidance:

  1. First, directly replace Adam with RAdam without changing any other settings (if Adam works well with a given configuration, RAdam is likely to work with it too); a minimal drop-in sketch is shown after this list. Note that if you are using Adam with warmup, try RAdam with warmup first (rather than RAdam without warmup).
  2. Then further tune the hyper-parameters for better performance.
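
A minimal drop-in sketch for step 1, assuming the RAdam class from this repository is importable as `from radam import RAdam` (adjust the import to wherever the optimizer file lives in your project) and that its constructor mirrors `torch.optim.Adam`'s. The toy model, data, and hyper-parameter values below are placeholders.

```python
import torch
import torch.nn as nn
from radam import RAdam  # assumed import path for this repository's optimizer

model = nn.Linear(10, 2)  # placeholder model
criterion = nn.MSELoss()

# Before: optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# After:  keep the same hyper-parameters, only swap the optimizer class.
optimizer = RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8)

for step in range(100):
    inputs, targets = torch.randn(32, 10), torch.randn(32, 2)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
```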