STA410_W24_StatisticalComputation

Third re-build of STA410 Statistical Computation / STA2102 Computational Techniques in Statistics

Primary Language: Jupyter Notebook

Course Topics

  1. Sampling: Inverse CDF, Rejection, and Importance Sampling
    1. Lecture Notebook
    2. Coding Demo: python speed
    3. Homework: Modulus Recursion
    4. Extra Coding: New to python?
    5. Extra Reading: Pseudorandomness and Floating-point numbers
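     As a quick sketch of the inverse-CDF idea (an illustration, not the course notebook's code; the seed and numbers are arbitrary): if U ~ Uniform(0,1) then F⁻¹(U) has CDF F, so Exponential(λ) draws come from inverting F(x) = 1 − e^(−λx).
     ```python
     import numpy as np

     rng = np.random.default_rng(410)  # arbitrary seed, for reproducibility

     def exponential_inverse_cdf(n, lam=1.0):
         # F(x) = 1 - exp(-lam*x)  =>  F^{-1}(u) = -log(1 - u)/lam
         u = rng.uniform(size=n)
         return -np.log1p(-u) / lam  # log1p(-u) = log(1 - u), numerically safer

     samples = exponential_inverse_cdf(100_000, lam=2.0)
     print(samples.mean())  # should be close to 1/lam = 0.5
     ```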
  2. Estimation: Monte Carlo (MC) integration, estimation error, improving efficiency, antithetic sampling and control variates (correlation)
    1. Lecture Notebook
    2. Coding Demo: Adaptive Squeezed Rejection Sampling
    3. Homework: Integration Estimation
    4. Extra Reading: Integral Approximation
    5. Extra Reading: Importance Sampling Bias
    6. Extra Coding: Importance Sampling Hidden Markov Models (HMMs)
    7. Extra Reading: HMMs
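     For a flavour of the variance-reduction idea (a toy example, not the homework): estimate ∫₀¹ eˣ dx = e − 1 by plain Monte Carlo and with antithetic pairs (U, 1 − U), whose negative correlation shrinks the estimator's variance for monotone integrands.
     ```python
     import numpy as np

     rng = np.random.default_rng(410)
     u = rng.uniform(size=100_000)

     # Plain Monte Carlo estimate of I = integral of e^x over [0, 1] = e - 1
     plain = np.exp(u).mean()

     # Antithetic pairs: average f(U) and f(1 - U) before taking the mean;
     # for monotone f the pair is negatively correlated, reducing variance.
     anti = (0.5 * (np.exp(u) + np.exp(1.0 - u))).mean()

     print(plain, anti, np.e - 1)  # both near 1.71828; anti has the smaller standard error
     ```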
  3. Markov Chain Monte Carlo (MCMC): High dimensional integration, Gibbs Sampling, Slice Sampling, Metropolis-Hastings, PyMC, Hamiltonian Monte Carlo (HMC)
    1. Lecture Notebook
    2. Coding Demo: Hamiltonian Monte Carlo with PyMC
    3. Homework: Probabilistic Programming
    4. Extra Coding: PyMC python
    5. Extra Reading: MCMC Diagnostics and Theory
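     A bare-bones random-walk Metropolis-Hastings sampler, sketched here for a made-up two-component normal mixture target (illustrative only; the course uses PyMC for real work):
     ```python
     import numpy as np

     rng = np.random.default_rng(410)

     def log_target(x):
         # Unnormalized log-density of an equal mixture of N(-2, 1) and N(2, 1)
         return np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)

     x, chain = 0.0, []
     for _ in range(50_000):
         proposal = x + rng.normal(scale=1.0)       # symmetric random-walk proposal
         if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
             x = proposal                           # accept with prob min(1, ratio)
         chain.append(x)

     chain = np.array(chain[5_000:])                # discard burn-in
     print(chain.mean(), chain.std())               # mean near 0 by symmetry
     ```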
  4. Numerical Precision: error, conditioning, and linear algebra (floating-point behaviour and the SVD)
    1. Lecture Notebook
    2. No Coding Demo this week; we'll have a long lecture instead. The prerequisite reading becomes important for the end of this lecture and remains relevant for future material. What was being considered for the Coding Demo has instead remained part of the Homework [so the homework is a little longer than usual]
    3. Prerequisites: Linear Algebra
    4. Homework: Numerical Precision for Means and Variances
    5. Extra Reading: Analog versus Digital Arithmetic
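     To see why this week matters, here is a classic floating-point failure (a toy demonstration, not the homework): the one-pass "sum of squares minus n times the squared mean" variance formula cancels catastrophically when the mean is large relative to the spread.
     ```python
     import numpy as np

     rng = np.random.default_rng(410)
     x = 1e8 + rng.standard_normal(1_000)   # true variance is about 1, mean is huge

     n = len(x)
     # One-pass "textbook" formula: subtracts two nearly equal ~1e16 numbers,
     # so nearly all significant digits cancel away in double precision.
     naive = (np.sum(x**2) - n * np.mean(x)**2) / (n - 1)

     # Two-pass formula: center first, then square -- no catastrophic cancellation.
     two_pass = np.sum((x - np.mean(x))**2) / (n - 1)

     print(naive, two_pass)  # naive can be wildly wrong (even negative); two_pass is near 1
     ```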
  5. Linear Algebra: SVD/PCA/ICA/PCR, Condition, Regression VIFs, and Matrix Decompositions for Least Squares
    1. Prerequisites: Linear Algebra [still (or now probably even more) applicable compared to last week...]
    2. Lecture Notebook
    3. Coding Demo: Least Squares
    4. Homework: Randomized Linear Algebra
    5. Extra Coding: Gram-Schmidt and the Cholesky
    6. Extra Coding: More Least Squares
    7. Extra Reading: Computational Speed and Complexity
    8. Extra Reading: Matrix Condition Numbers
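     A small illustration of why decompositions beat the normal equations on ill-conditioned designs (a sketch with made-up collinear data, not the Coding Demo):
     ```python
     import numpy as np

     rng = np.random.default_rng(410)
     n = 200
     x1 = rng.standard_normal(n)
     # Two nearly collinear columns make the design ill-conditioned
     X = np.column_stack([np.ones(n), x1, x1 + 1e-6 * rng.standard_normal(n)])
     y = X @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.standard_normal(n)

     print(np.linalg.cond(X))  # a large condition number flags the collinearity

     # The normal equations work with X'X, squaring the condition number ...
     beta_normal_eqs = np.linalg.solve(X.T @ X, X.T @ y)
     # ... whereas QR factors X itself, avoiding that squaring
     Q, R = np.linalg.qr(X)
     beta_qr = np.linalg.solve(R, Q.T @ y)

     print(beta_normal_eqs, beta_qr)  # QR is the more numerically trustworthy of the two
     ```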
  6. Coding Challenge
  7. Reading Week
  8. Midterm
  9. From Direct Methods (Week 5) to Iterative Methods: Gauss-Seidel (GS), Successive Overrelaxation, Coordinate Descent (AKA Nonlinear GS), Gradient Descent, and AutoDiff
    1. Coding Demo: Splines, smoothing matrices (lowess/loess), generalized additive models (GAMs)
      [including some extra broader contextual material on basis functions and regularization and penalty functions]
    2. Lecture Notebook
    3. Homework: Gradient Descent
    4. Extra Reading: Line Search to find optimal step sizes and Conjugate Gradient Descent
    5. Extra Coding: Conjugate Gradient Descent
    6. Extra Reading: Function Spaces
    7. Extra Coding: Lagrange Polynomial Interpolation
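     A minimal Gauss-Seidel sketch (a toy example, not the lecture code): sweep through the coordinates of Ax = b, always reusing the freshest values, which converges for, e.g., diagonally dominant A.
     ```python
     import numpy as np

     def gauss_seidel(A, b, iters=100):
         # Sweep through coordinates, solving equation i for x[i] while
         # always reusing the freshest values of the other coordinates.
         x = np.zeros_like(b, dtype=float)
         for _ in range(iters):
             for i in range(len(b)):
                 others = A[i] @ x - A[i, i] * x[i]   # sum over j != i of A[i, j] * x[j]
                 x[i] = (b[i] - others) / A[i, i]
         return x

     A = np.array([[4.0, 1.0, 0.0],
                   [1.0, 4.0, 1.0],
                   [0.0, 1.0, 4.0]])   # diagonally dominant, so the sweeps converge
     b = np.array([1.0, 2.0, 3.0])
     print(gauss_seidel(A, b), np.linalg.solve(A, b))  # should agree closely
     ```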
  10. Optimization: Hessians and Jacobians, Gauss-Newton, Maximum Likelihood Estimation (score function, etc.), Fisher Scoring, and Newton's Method
    1. Lecture Notebook
    2. Coding Demo: classical optimization methods in TensorFlow, in a combined notebook shared with the Homework
      (with Nonlinear Gauss-Seidel, Gradient Descent, Gauss-Newton, Fisher Scoring, and Newton's Method)
    3. Homework: the same combined notebook as the Coding Demo above
    4. Extra Reading: Variants on Newton's Method and Convergence Considerations
    5. Extra Coding: Newton's Method versus Secant, Fixed-Point Iteration, etc.
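      A compact Newton's Method example (illustrative, not the course notebook): the shape of a Gamma(α, scale = 1) model, where the observed and expected information coincide, so Newton's Method and Fisher Scoring are the same update here.
      ```python
      import numpy as np
      from scipy.special import digamma, polygamma

      rng = np.random.default_rng(410)
      x = rng.gamma(shape=3.0, scale=1.0, size=5_000)

      # For Gamma(alpha, scale=1): score l'(alpha)    = sum(log x) - n*digamma(alpha)
      #                            hessian l''(alpha) = -n*trigamma(alpha)
      n, mean_log = len(x), np.log(x).mean()

      alpha = 1.0                              # starting value
      for _ in range(25):
          score = n * (mean_log - digamma(alpha))
          hessian = -n * polygamma(1, alpha)   # polygamma(1, .) is the trigamma function
          alpha -= score / hessian             # Newton step: alpha - l'/l''
      print(alpha)                             # should be near the true shape 3.0
      ```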
  11. Newton's Method, Sandwich Estimators, and IRLS (Iteratively Reweighted Least Squares), including M-estimation and Quasi-Likelihood estimation
    1. Lecture Notebook
    2. STRONGLY Recommended Extra Reading: Modern Optimizers are Newton's Method simplifications
    3. Homework: Logistic Regression via IRLS
    4. Coding Demo: Gauss-Newton
    5. Extra Coding: Huber Loss
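      A minimal IRLS sketch for logistic regression (a toy version of the homework idea, with simulated data): each step is a weighted least squares solve with weights p(1 − p) and a "working response".
      ```python
      import numpy as np

      rng = np.random.default_rng(410)
      n = 1_000
      X = np.column_stack([np.ones(n), rng.standard_normal(n)])
      beta_true = np.array([-0.5, 1.5])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

      beta = np.zeros(2)
      for _ in range(25):
          eta = X @ beta
          p = 1 / (1 + np.exp(-eta))      # current fitted probabilities
          w = p * (1 - p)                 # IRLS weights (the binomial variance function)
          z = eta + (y - p) / w           # "working response"
          XtW = X.T * w                   # broadcasts w across the rows of X
          beta = np.linalg.solve(XtW @ X, XtW @ z)  # weighted least squares step
      print(beta)  # should be near beta_true
      ```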
  12. Variational Inference, the EM Algorithm, and Deep Learning (Constrained Optimization not covered)
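      A small EM sketch (illustrative only, with made-up data): a two-component normal mixture with known unit variances, alternating responsibilities (E-step) with weighted updates (M-step).
      ```python
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(410)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

      pi, mu = 0.5, np.array([-1.0, 1.0])   # initial mixing weight and means
      for _ in range(100):
          # E-step: responsibility of component 2 for each observation
          d1 = (1 - pi) * norm.pdf(x, mu[0], 1.0)
          d2 = pi * norm.pdf(x, mu[1], 1.0)
          r = d2 / (d1 + d2)
          # M-step: weighted updates maximizing the expected complete-data log-likelihood
          pi = r.mean()
          mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                         np.sum(r * x) / np.sum(r)])
      print(pi, mu)  # should approach 0.7 and (-2, 3)
      ```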
  13. Coding Challenge
  14. Final