Deep Learning: A Mathematical Overview

An overview of deep learning as it stands today, from a mathematical perspective. We start with the fundamentals of learning, then go on to analyze how different tasks are learnt, what data is used to learn such tasks, and how to better understand that data. This project is by no means comprehensive; it is intended simply to provide a high-level overview of some of the mathematical foundations of deep learning. Feel free to open issues for proposed additions and corrections.

Contents

  • Neural Networks: An Overview
    • Autoencoders and Representation Learning
    • Convolutional Neural Networks
    • Other Notable Architectures
  • Learning in Neural Networks
    • 1st Order Optimization
    • 1st Order Update Schemes
    • 2nd Order Optimization
  • Measuring Learning: Losses
    • Regression Losses
    • Distributions and Cross Entropy
    • Kullback-Leibler Divergence
    • Wasserstein Metric
    • Earth Mover's Distance
  • Learning to Learn: Bayesian Optimization
    • Building a Surrogate: Gaussian Processes
    • Common Covariance Kernels
    • Kernels and Hyper-hyperparameter Optimization
    • Exploration and Exploitation: The Acquisition Function
    • Inference and Markov Chain Monte Carlo
    • Metropolis-Hastings
  • Understanding Data: Spectral Analysis
    • Characterizing Signals
    • Fourier Transform
    • Short-Time Fourier Transform
    • Wavelet Transform
    • Applications in Deep Learning
  • Understanding Datasets: Bayes Error
    • Estimating the Bayes Error Rate

Authors