Improve your state of the art by using the best activation function and the best meta-optimizer
LifeIsStrange opened this issue · 5 comments
You could increase GPT-3 accuracy by using Ranger, which combines state-of-the-art optimizers with gradient centralization:
https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer
You seem to be using the Adam optimizer. It has been succeeded by RAdam (Rectified Adam). Ranger will bring you this improvement, along with several other synergistic ones, essentially for free.
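For example, a minimal PyTorch sketch of the swap (assuming `ranger.py` from the repo above is importable, per its README; the model and learning rate here are just placeholders):

```python
import torch
from ranger import Ranger  # from lessw2020/Ranger-Deep-Learning-Optimizer

model = torch.nn.Linear(768, 768)  # placeholder for the actual model

# before:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# after: RAdam + LookAhead + gradient centralization in one optimizer
optimizer = Ranger(model.parameters(), lr=1e-4)
```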
Orthogonally, you would probably also benefit from Mish instead of the activation you currently use (ReLU?), but it should be tested after Ranger since it could regress accuracy (even if that's unlikely):
https://github.com/digantamisra98/Mish
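Mish is just `x * tanh(softplus(x))`, so it's a one-line swap to try (a hedged sketch; recent PyTorch versions also ship it built in as `torch.nn.Mish`):

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    # mish(x) = x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

# e.g. in a feed-forward block, replace
#   h = F.relu(self.fc1(x))
# with
#   h = mish(self.fc1(x))
```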
At the level these models are trained at, using a specific optimizer/activation will not necessarily get you better results.
Additionally, considering GPT-3's size, I would suggest not using any optimizer heavier than SGD because of the computational cost. The same goes for Mish.
@minimaxir It will not necessarily bring gains, but it is still low-hanging fruit that should be tried.
@digantamisra98 RAdam (not the full Ranger package) does not increase computational cost compared to Adam.
I've read somewhere that Mish can be as efficient as ReLU.
Maybe with https://github.com/thomasbrandon/mish-cuda?
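If that repo's README is anything to go by, it exposes a drop-in `nn.Module` (a sketch, assuming the CUDA extension builds on your setup):

```python
import torch.nn as nn
from mish_cuda import MishCuda  # from thomasbrandon/mish-cuda

block = nn.Sequential(
    nn.Linear(768, 3072),
    MishCuda(),  # drop-in replacement for nn.ReLU()
    nn.Linear(3072, 768),
)
```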
@LifeIsStrange everything above SGD is expensive.