MathsDL-spring18

Topics course Mathematics of Deep Learning, NYU, Spring 18. CSCI-GA 3033.

Logistics

  • Tuesdays, 7:10pm-9pm, CIWW 102

  • Tutoring Session with Parallel Curricula (optional): Fridays 11am-12:30pm, CIWW 101. For the first week only, the session runs 10:30am-12pm.

  • Piazza: sign-up here

  • Office Hours: Tuesdays 9:30am-11:00am

Instructors

Lecture Instructor: Joan Bruna (bruna@cims.nyu.edu)

Tutor (Parallel Curricula): Cinjon Resnick (cinjon@nyu.edu)

Syllabus

This graduate-level topics course aims to offer a glimpse into the emerging mathematical questions around Deep Learning. In particular, we will focus on the different geometrical aspects surrounding these models, from geometric stability priors on the input to the geometry of optimization, generalization and learning. We will cover both the background and the current open problems.

Besides the lectures, we will also run a parallel curriculum (optional) which, starting from a landmark recent DL paper (AlphaGo), will trace the fundamentals of Dynamic Programming, Policy Learning and Monte-Carlo Tree Search back through the literature and lab materials.

Detailed Syllabus

  • Introduction: the Curse of Dimensionality

  • Part I: Geometry of Data

    • Euclidean Geometry: transportation metrics, CNNs, scattering.
    • Non-Euclidean Geometry: Gromov-Hausdorff distances, Graph Neural Networks.
    • Unsupervised Learning under Geometric Priors (Implicit vs explicit models, microcanonical, transportation metrics).
    • Applications and Open Problems: adversarial examples, graph inference, inverse problems.
  • Part II: Geometry of Optimization and Generalization

    • Stochastic Optimization (Robbins & Monro, Convergence of SGD)
    • Stochastic Differential Equations (Fokker-Planck, Gradient Flow, Langevin Dynamics, links with SGD; open problems)
    • Information Geometry and Optimal Transport (Amari, Fisher-Rao metric, Wasserstein)
    • Reproducing Kernel Hilbert Spaces
    • Landscape of Deep Learning Optimization (Tensor/Matrix factorization, Deep Nets; open problems).
    • Generalization in Deep Learning.

Pre-requisites

Multivariate Calculus, Linear Algebra, and Probability and Statistics at a solid undergraduate level.

Notions of Harmonic Analysis, Differential Geometry and Stochastic Calculus are nice to have, but not essential.

Grading

The course will be graded with a final project, consisting of an in-depth survey of a topic related to the syllabus, plus a participation grade. The detailed abstract of the project will be graded at the midterm.

Lectures

Week Date Lecture Topic Slides/References
1 1/23 Lec1 Introduction: The Curse of Dimensionality in ML Slides References
2 1/30 Lec2 Euclidean Geometric Stability. Slides References
3 2/6 Guest Lecture: Leon Bottou (Facebook/NYU) Slides References
4 2/13 Lec3 Scattering Transforms and CNNs Slides References
5 2/20 Lec4 Non-Euclidean Geometric Stability. Gromov-Hausdorff distances. Graph Neural Nets Slides References
6 2/27 Lec5 Graph Neural Network Applications Slides References
7 3/6 Lec6 Unsupervised Learning under Geometric Priors. Implicit vs Explicit models. Optimal Transport models. Microcanonical Models. Open Problems Slides References
8 3/13 Spring Break References
9 3/20 Lec7 Discrete vs Continuous Time Optimization. Fokker-Planck. Langevin Dynamics. References
10 3/27 Lec8 Landscape of Deep Learning Optimization. Tensor factorization References
11 4/3 Lec9 Landscape of Deep Learning Optimization (cont'd). References
12 4/10 Lec10 Information Geometry. References
13 4/17 Lec11 Reproducing Kernel Hilbert Spaces References
14 4/24 Lec12 Optimal Transport in ML. Adversarial Training References
15 5/1 Lec13 Generalization. Review of Rademacher complexity. Stability. References

Lab sessions / Parallel Curricula

DeepStack living document: https://goo.gl/zzMzoz

AlphaGoZero living document: https://goo.gl/iFZ4XD

  • Class 6: The Paper

  • Class 5: MCTS & RL

    • Motivation: Up to this point, we’ve learned a lot about how games can be solved and how RL works at a foundational level. Before we jump into the paper, one last foray contrasting and unifying the games and learning perspectives is worthwhile for understanding the domain more fully.
    • Required Reading:
    • Optional Reading:
      • Vodopivec: Survey of research inspired by both fields → 5.
    • Questions:
      • What are key differences between MCTS and RL?
      • UCT can be described in RL terms as follows: “The original UCT searches identically as an offline on-policy every-visit MC control algorithm that uses UCB1 as the policy.” What does each of these terms mean? (The code sketch at the end of this class illustrates this combination.)
      • What is a Representation Policy? Give an example not described in the text.
      • What is a Control Policy? Give an example not described in the text.
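    • Code sketch: as one concrete handle on the UCT question above, here is a minimal sketch of tabular every-visit Monte-Carlo control whose behavior policy is UCB1 over action values, which is the reading of UCT in the quote. The two-state environment and all constants are made up purely for illustration.

      import math, random
      from collections import defaultdict

      # Toy episodic environment (hypothetical): two states, two actions.
      def step(state, action):
          reward = random.gauss(0.5 * action + 0.1 * state, 1.0)
          next_state = (state + action) % 2
          done = random.random() < 0.3
          return next_state, reward, done

      Q = defaultdict(float)      # action-value estimates
      N = defaultdict(int)        # visit counts per (state, action)
      N_s = defaultdict(int)      # visit counts per state
      actions = [0, 1]

      def ucb1_policy(state, c=1.4):
          # UCB1 as the control policy: untried actions first, then
          # greedy w.r.t. Q plus an exploration bonus.
          for a in actions:
              if N[(state, a)] == 0:
                  return a
          return max(actions, key=lambda a: Q[(state, a)]
                     + c * math.sqrt(math.log(N_s[state]) / N[(state, a)]))

      for episode in range(5000):
          state, trajectory, done = 0, [], False
          while not done:
              a = ucb1_policy(state)
              next_state, r, done = step(state, a)
              trajectory.append((state, a, r))
              state = next_state
          # Every-visit MC backup: update every (s, a) on the trajectory
          # toward the discounted return observed from that point on.
          G = 0.0
          for s, a, r in reversed(trajectory):
              G = r + 0.99 * G
              N[(s, a)] += 1
              N_s[s] += 1
              Q[(s, a)] += (G - Q[(s, a)]) / N[(s, a)]

      print({k: round(v, 2) for k, v in Q.items()})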
  • Class 4: MCTS & UCT

    • Motivation: Monte Carlo Tree Search (MCTS) forms the backbone of AlphaGoZero. It’s what lets it reliably explore and then home in on the best policy. UCT (UCB applied to trees) builds on top of what we’ve been learning and, paired with MCTS, is integral to the training process. A minimal code sketch follows the questions below.
    • Required Reading:
    • Optional Reading:
    • Questions:
      • Can you detail each of the four parts of the MCTS algorithm?
      • What characteristics make MCTS a good choice?
      • What are examples of domain knowledge default policies in Go?
      • Why is UCT optimal? Can you prove that the failure probability at the root converges to zero at a polynomial rate in the number of games simulated?
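    • Code sketch: to make the four phases concrete, here is a minimal single-player UCT sketch over a toy counting game (the game, the constants and the helper names are hypothetical, chosen only so the snippet runs); selection, expansion, simulation and backpropagation are labelled in the code.

      import math, random

      class Node:
          def __init__(self, state, parent=None):
              self.state, self.parent = state, parent
              self.children = {}              # action -> Node
              self.visits, self.value = 0, 0.0

      # Toy game: start at 0, add +1 or +2 per move, land exactly on 10 to win.
      def legal_actions(state): return [1, 2] if state < 10 else []
      def apply_move(state, a): return state + a
      def is_terminal(state):   return state >= 10
      def outcome(state):       return 1.0 if state == 10 else 0.0

      def uct_child(node, c=1.4):
          # UCB1 over children: exploit mean value, explore rarely visited moves.
          return max(node.children.values(),
                     key=lambda ch: ch.value / ch.visits
                         + c * math.sqrt(math.log(node.visits) / ch.visits))

      def mcts(root_state, iterations=2000):
          root = Node(root_state)
          for _ in range(iterations):
              node = root
              # 1. Selection: descend with UCT while fully expanded.
              while not is_terminal(node.state) and \
                    len(node.children) == len(legal_actions(node.state)):
                  node = uct_child(node)
              # 2. Expansion: add one untried child.
              if not is_terminal(node.state):
                  untried = [a for a in legal_actions(node.state)
                             if a not in node.children]
                  a = random.choice(untried)
                  node.children[a] = Node(apply_move(node.state, a), parent=node)
                  node = node.children[a]
              # 3. Simulation: play out with a random default policy.
              state = node.state
              while not is_terminal(state):
                  state = apply_move(state, random.choice(legal_actions(state)))
              # 4. Backpropagation: push the result back up to the root.
              result = outcome(state)
              while node is not None:
                  node.visits += 1
                  node.value += result
                  node = node.parent
          # Recommend the most-visited root action.
          return max(root.children, key=lambda a: root.children[a].visits)

      print(mcts(0))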
  • Class 3: Policy and Value Functions

    • Motivation: The Policy and Value Functions are at the core of Reinforcement Learning. The Policy function is the set of probabilities you give to each possible move. The Value function is your estimate of how good the current state is. In AlphaGoZero, a single network computes both a value and a policy, then later updates its weights based on the difference between those figures and the empirical results. A toy two-headed network sketch follows the questions below.
    • Required Reading (note: Sutton from here out refers to the final version. Make sure it says COMPLETE DRAFT):
      • Value Function:
        • Sutton 3.5, 3.6, 3.7
        • Sutton: 9.1, 9.2, 9.3 (important!)
      • Policy Function:
        • Sutton: 4.1, 4.2, 4.3
        • Sutton: 13.1, 13.2 (important!), 13.3, 13.4
    • Optional Reading:
    • Questions:
      • Why does policy gradient have such high variance?
      • What is the difference between off-policy and on-policy?
      • Sutton:
        • 3.13: What is the Bellman equation for action values, that is, for qπ? ...
        • 3.14: In the gridworld example … are the signs of these rewards important, or only the intervals between them? Prove ...
        • 3.15: Now consider adding a constant c to all the rewards in an episodic task … would this have any effect, or would it leave the task unchanged as in the continuing task above? Why or why not? Give an example.
        • 3.20: Give the Bellman equation for q∗ for the recycling robot.
        • 4.3: What are the equations analogous to (4.3), (4.4), and (4.5) for the action-value function qπ and its successive approximation by a sequence of functions q0, q1, q2, . . . ?
        • 4.6 (important!): How would policy iteration be defined for action values? Give a complete algorithm for computing q∗, analogous to that on page 65 for computing v∗. Please pay special attention to this exercise, because the ideas involved will be used throughout the rest of the book.
        • 13.2 (important!): Prove (13.7) using the definitions and elementary calculus.
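    • Code sketch: as a concrete picture of the "one network, two heads" idea, here is a minimal PyTorch sketch (the architecture, board size, loss weighting and self-play data are hypothetical stand-ins, not the actual AlphaGoZero setup): a shared trunk feeds a policy head trained against search probabilities and a value head trained against game outcomes.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class PolicyValueNet(nn.Module):
          # Tiny stand-in: shared trunk, policy head, value head.
          def __init__(self, board_size=5, hidden=64):
              super().__init__()
              n = board_size * board_size
              self.trunk = nn.Sequential(nn.Linear(n, hidden), nn.ReLU())
              self.policy_head = nn.Linear(hidden, n + 1)   # moves + pass
              self.value_head = nn.Linear(hidden, 1)

          def forward(self, x):
              h = self.trunk(x)
              return self.policy_head(h), torch.tanh(self.value_head(h))

      net = PolicyValueNet()
      opt = torch.optim.SGD(net.parameters(), lr=1e-2, weight_decay=1e-4)

      # Fake self-play batch: states, MCTS visit distributions pi, outcomes z.
      states = torch.randn(32, 25)
      pi = torch.softmax(torch.randn(32, 26), dim=1)
      z = torch.randint(0, 2, (32, 1)).float() * 2 - 1      # +1 win, -1 loss

      logits, v = net(states)
      # Combined loss: value MSE plus policy cross-entropy against pi.
      loss = F.mse_loss(v, z) - (pi * F.log_softmax(logits, dim=1)).sum(1).mean()
      opt.zero_grad()
      loss.backward()
      opt.step()
      print(float(loss))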
  • Class 2: Multi-Armed Bandits and Upper Confidence Bounds

    • Motivation: Bandits and UCB are key components of how MCTS was originally formalized. The node selection during the search is achieved through the UCB approach, which is analogous to how it’s performed in a multi-armed bandit scenario. A UCB1 code sketch follows the questions below.
    • Required Reading:
    • Optional Reading:
    • Questions:
      • Sutton Exercises 2.1, 2.2, 2.4, 2.5
      • Sutton: What are the pros and cons of the optimistic initial values method?
      • Kun: In the proof for the expected cumulative regret of UCB1, why is delta(T) a trivial regret bound if the deltas are all the same?
      • Kun: Do you understand the argument for why the regret bound is O(sqrt(KTlog(T)))?
      • Can you reproduce the UCB1 algorithm?
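    • Code sketch: one possible answer to the last question, a minimal UCB1 loop on a toy Bernoulli bandit (the arm probabilities are made up).

      import math, random

      p = [0.2, 0.5, 0.7]        # hypothetical Bernoulli arm probabilities
      counts = [0] * len(p)      # pulls per arm
      means = [0.0] * len(p)     # empirical mean reward per arm

      def ucb1(t):
          # Pull each arm once, then maximize mean + sqrt(2 ln t / n_i).
          for i in range(len(p)):
              if counts[i] == 0:
                  return i
          return max(range(len(p)),
                     key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))

      total = 0.0
      for t in range(1, 10001):
          arm = ucb1(t)
          reward = 1.0 if random.random() < p[arm] else 0.0
          counts[arm] += 1
          means[arm] += (reward - means[arm]) / counts[arm]
          total += reward

      print(counts, round(total / 10000, 3))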
  • Class 1: Minimax and Alpha Beta Pruning

    • Motivation: These original core ideas did a great deal for the study of games. They spurred the field forward starting in the 1950s, and they still shape how game-playing engines are built today, including popular chess engines like Stockfish. An alpha-beta code sketch follows the questions below.
    • Required Reading:
    • Optional Reading:
    • Questions:
      • (Knuth) Show that AlphaBetaMin(p, alpha, beta) = -AlphaBetaMax(p, -beta, -alpha). (p. 300)
      • (Knuth) For Theorem 1.(1), why are the successor positions of type 2? (p. 305)
      • (Knuth) For Theorem 1.(2), why is it that p’s successor position is of type 3 if p is not terminal?
      • (Knuth) For Theorem 1.(3), why is it that p’s successor positions are of type 2 if p is not terminal?
      • (Knuth) Show that the subparts of Theorem 2 are correct.
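    • Code sketch: for reference alongside the Knuth questions, a minimal fail-hard alpha-beta sketch over a small hard-coded game tree (the tree and its leaf values are arbitrary), written as an AlphaBetaMax/AlphaBetaMin pair in the spirit of the reading.

      # Game tree as nested lists; leaves are static evaluations from the
      # root (maximizing) player's point of view. Values are arbitrary.
      TREE = [[3, 5, [6, 9]], [1, [2, 0]], [8, [7, 4], 2]]

      def alphabeta_max(node, alpha, beta):
          # Maximizing player: raise alpha, prune once alpha >= beta.
          if not isinstance(node, list):
              return node
          for child in node:
              alpha = max(alpha, alphabeta_min(child, alpha, beta))
              if alpha >= beta:
                  break
          return alpha

      def alphabeta_min(node, alpha, beta):
          # Minimizing player: lower beta, prune once alpha >= beta
          # (cf. Knuth's identity AlphaBetaMin(p, a, b) = -AlphaBetaMax(p, -b, -a)).
          if not isinstance(node, list):
              return node
          for child in node:
              beta = min(beta, alphabeta_max(child, alpha, beta))
              if alpha >= beta:
                  break
          return beta

      print(alphabeta_max(TREE, float("-inf"), float("inf")))   # expect 3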