Jupyter notebook demos accompanying readings on Stochastic Gradient Langevin Dynamics (SGLD).
The demos explore the basics of gradient descent and Langevin dynamics algorithms. They analyze the effect of parameters (e.g., the temperature and the standard deviation of the stochastic gradient) and of implementation schemes (e.g., whether the stochastic gradient is simulated or computed with a finite-difference scheme) on the performance of the algorithms. They also include a stage-wise visualization tool that shows how the learning behavior of the algorithms develops over iterations.
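For context, below is a minimal sketch of the kind of Langevin update with a simulated stochastic gradient that these demos study. The quadratic potential, the function sgld_step, and the parameter names (step_size, temperature, grad_noise_std) are illustrative assumptions, not code taken from the notebooks.

```python
import numpy as np

def sgld_step(x, grad_U, step_size, temperature, grad_noise_std, rng):
    """One Langevin update with a simulated stochastic gradient.

    The stochastic gradient is modeled as the exact gradient plus
    Gaussian noise with standard deviation grad_noise_std.
    (Illustrative sketch; not the notebooks' implementation.)
    """
    g = grad_U(x) + grad_noise_std * rng.standard_normal()
    injected_noise = np.sqrt(2.0 * step_size * temperature) * rng.standard_normal()
    return x - step_size * g + injected_noise

# Example: sample from exp(-U) with the quadratic potential U(x) = x**2 / 2.
rng = np.random.default_rng(0)
grad_U = lambda x: x
x, trajectory = 5.0, []
for _ in range(5000):
    x = sgld_step(x, grad_U, step_size=1e-2, temperature=1.0,
                  grad_noise_std=0.5, rng=rng)
    trajectory.append(x)
print(np.mean(trajectory), np.std(trajectory))  # roughly 0 and 1 at temperature 1
```

Lowering the temperature concentrates the iterates near the minimizer (closer to plain gradient descent), while a larger grad_noise_std adds extra scatter on top of the injected Langevin noise.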
For the undergraduate research course APMA E3900 at Columbia University.
Author: Kaiwen Zhang
Email: kz2387@columbia.edu
Supervisor: Prof. Kui Ren