This code can be used to analyze the worst-case linear convergence rate of first-order optimization algorithms applied to smooth strongly convex functions as described in the paper:
[1] A. Taylor, B. Van Scoy, L. Lessard, "Lyapunov Functions for First-Order Methods: Tight Automated Convergence Guarantees," Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4904-4913, 2018.
To run the code, download the repository and add it to the MATLAB path. For convenience, the data files produced by main.m are included in the data folder.
Note: This code requires YALMIP along with a suitable SDP solver (e.g., SeDuMi, SDPT3, or MOSEK).
main.m
    generates all the figures in [1] and illustrates how to use each of the functions below.
FixedStepMethod.m
    calculates the worst-case linear convergence rate of a fixed-step method applied to a smooth strongly convex function.
SteepestDescent.m
    analyzes the steepest descent method with an exact line search.
HeavyBallSubspaceSearch.m
    analyzes the heavy-ball method with an exact 2-dimensional subspace search at each iteration.
RestartedFGM.m
    analyzes the standard fast gradient method of (Nesterov, 1983), restarted every N iterations.
The example below calculates the worst-case linear convergence rate of each of the following methods applied to an L-smooth, mu-strongly convex function:
- Gradient method with step-size 1/L
- Gradient method with step-size 2/(L+mu)
- Heavy-ball method
- Fast gradient method
- Triple momentum method
mu = 1; % strong convexity parameter
L = 10; % Lipschitz parameter of gradient
FixedStepMethod(mu,L);
% Results printed to screen:
%
% Method Rate
% GM (1/L) : 0.9000
% GM (2/(L+mu)) : 0.8182
% HBM : 0.8602
% FGM : 0.7518
% TMM : 0.6838
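Several of these rates can be sanity-checked against known closed-form expressions: gradient descent with step 1/L contracts at rate 1 - mu/L, gradient descent with step 2/(L+mu) at (L - mu)/(L + mu), and the triple momentum method at 1 - sqrt(mu/L). A minimal cross-check of the printed table (written in Python for portability; the HBM and FGM rates come from the SDP analysis and are not reproduced by a simple formula here):

```python
from math import sqrt

mu, L = 1.0, 10.0  # same parameters as the MATLAB example above

rate_gm_1L  = 1 - mu / L            # GM with step 1/L
rate_gm_opt = (L - mu) / (L + mu)   # GM with step 2/(L+mu)
rate_tmm    = 1 - sqrt(mu / L)      # triple momentum method

print(f"GM (1/L)      : {rate_gm_1L:.4f}")   # 0.9000
print(f"GM (2/(L+mu)) : {rate_gm_opt:.4f}")  # 0.8182
print(f"TMM           : {rate_tmm:.4f}")     # 0.6838
```

These match the first, second, and last rows of the table printed by FixedStepMethod(mu,L).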