PyTorch_OLoptim

A PyTorch implementation of various Online & Stochastic optimization algorithms for deep learning



Descriptions

FTRL: Follow the Regularized Leader

  • intro: a classic online learning algorithm that, at each round, plays the minimizer of the accumulated loss plus a regularization term
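
To make the idea concrete, here is a minimal sketch of a per-coordinate FTRL-Proximal update (the variant popularized by McMahan et al. for sparse linear models). This is an illustrative stand-alone function, not the repository's implementation; the parameter names `alpha`, `beta`, `l1`, `l2` follow the common convention and are assumptions here.

```python
import math

def ftrl_update(z, n, g, w, alpha=0.1, beta=1.0, l1=1.0, l2=0.0):
    """One FTRL-Proximal step for a single coordinate.

    z, n : accumulated state (shifted gradient sum, squared-gradient sum)
    g    : current gradient for this coordinate
    w    : current weight (needed for the sigma correction term)
    Returns the updated (z, n, w).
    """
    sigma = (math.sqrt(n + g * g) - math.sqrt(n)) / alpha
    z = z + g - sigma * w
    n = n + g * g
    if abs(z) <= l1:
        w = 0.0  # L1 threshold keeps the weight exactly sparse
    else:
        w = -(z - math.copysign(l1, z)) / ((beta + math.sqrt(n)) / alpha + l2)
    return z, n, w
```

Small gradients leave the weight pinned at zero (the sparsity-inducing effect), while a sustained gradient pushes it off zero with an adaptive per-coordinate step size.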

FTML: [ICML 2017] Follow the Moving Leader in Deep Learning

SGDOL: [NeurIPS 2019] Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

STORM: [NeurIPS 2019] Momentum-Based Variance Reduction in Non-Convex SGD
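
STORM's core trick is a recursive gradient estimator that evaluates the *same* stochastic sample at both the current and the previous iterate, so the noise largely cancels. Below is a toy sketch on a noisy 1-D quadratic; the loss, step sizes, and the fixed momentum parameter `a` are illustrative assumptions (the paper's full algorithm adapts them over time).

```python
import random

def storm_minimize(target=3.0, steps=2000, lr=0.05, a=0.3, noise=0.5, seed=0):
    """Sketch of STORM's variance-reduced momentum on f(x; xi) = 0.5*(x - xi)^2,
    where xi ~ target + Gaussian noise, so grad f(x; xi) = x - xi."""
    rng = random.Random(seed)
    x, x_prev, d = 0.0, 0.0, None
    for _ in range(steps):
        xi = target + rng.gauss(0.0, noise)
        g_cur = x - xi                # gradient at x_t with sample xi
        if d is None:
            d = g_cur
        else:
            g_prev = x_prev - xi      # gradient at x_{t-1}, SAME sample xi
            # Recursive estimator: d_t = g_t + (1 - a) * (d_{t-1} - g_prev)
            d = g_cur + (1.0 - a) * (d - g_prev)
        x_prev = x
        x = x - lr * d
    return x
```

Because `g_cur - g_prev` uses the same sample, the noise term cancels in the momentum correction, which is exactly the variance-reduction mechanism the paper exploits.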

EXP3: Exponential-weight algorithm for Exploration and Exploitation
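
A minimal sketch of EXP3 for the adversarial multi-armed bandit: maintain exponential weights, mix in uniform exploration, and update the pulled arm with an importance-weighted reward estimate. The Bernoulli reward simulation and the value of `gamma` here are illustrative assumptions.

```python
import math
import random

def exp3(reward_means, gamma=0.1, rounds=3000, seed=0):
    """EXP3 sketch for bandit feedback with rewards in [0, 1].

    `reward_means` only drives the simulated feedback; the algorithm
    itself observes just the chosen arm's realized reward.
    """
    rng = random.Random(seed)
    k = len(reward_means)
    w = [1.0] * k
    pulls = [0] * k
    for _ in range(rounds):
        total = sum(w)
        # Mix exponential weights with uniform exploration (rate gamma).
        p = [(1 - gamma) * wi / total + gamma / k for wi in w]
        arm = rng.choices(range(k), weights=p)[0]
        r = 1.0 if rng.random() < reward_means[arm] else 0.0
        # Importance weighting (r / p[arm]) keeps the estimator unbiased.
        w[arm] *= math.exp(gamma * (r / p[arm]) / k)
        pulls[arm] += 1
    return pulls
```

Over time the better arm accumulates weight and is pulled far more often, while the `gamma / k` floor guarantees every arm keeps being explored.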

UCB: Upper Confidence Bound algorithm
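
For contrast with EXP3, a sketch of UCB1 for the stochastic bandit: play each arm once, then always pull the arm maximizing its empirical mean plus the confidence bonus sqrt(2 ln t / n). The Bernoulli simulation is an illustrative assumption.

```python
import math
import random

def ucb1(reward_means, rounds=3000, seed=0):
    """UCB1 sketch: optimism in the face of uncertainty."""
    rng = random.Random(seed)
    k = len(reward_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, rounds + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            # Upper confidence bound: empirical mean + exploration bonus.
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = 1.0 if rng.random() < reward_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
    return counts
```

The bonus shrinks as an arm is pulled more, so exploration of apparently worse arms decays at a logarithmic rate.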

SGDPF

  • intro: a toy example that uses gradient descent to tune the learning rate automatically. The name comes from 'SGD + parameter-free'
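
One common way to "gradient-descend the learning rate" is hypergradient descent: since w_t = w_{t-1} - lr * g_{t-1}, the derivative of the loss with respect to lr is -g_t * g_{t-1}, which can itself be descended. The sketch below illustrates that idea on a 1-D quadratic; whether SGDPF uses exactly this rule is an assumption, and `hyper_lr` is an illustrative hyperparameter.

```python
def sgd_autolr(steps=200, lr0=0.01, hyper_lr=0.001):
    """Toy hypergradient-descent sketch on f(w) = (w - 5)^2, g(w) = 2*(w - 5)."""
    w, lr = 0.0, lr0
    g_prev = 0.0
    for _ in range(steps):
        g = 2.0 * (w - 5.0)
        # Hypergradient step: d(loss)/d(lr) = -g_t * g_{t-1}
        lr -= hyper_lr * (-g * g_prev)
        w -= lr * g
        g_prev = g
    return w, lr
```

Starting from a deliberately small lr0, the learning rate grows while consecutive gradients agree and stabilizes as the iterate approaches the minimum.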