
Deep Learning: Theory and Experiments

Notes and experiments to understand deep learning concepts

Notes

  1. Background Materials
  2. Machine Learning Basics
  3. Deep Neural Networks
  4. Regularization
  5. Optimization
  6. Convolutional Neural Networks
  7. Embeddings
  8. Recurrent Neural Networks, LSTM, GRU

TensorFlow (tf) Experiments

  1. Hello World!
  2. Linear Algebra
  3. Matrix Decomposition
  4. Probability Distributions using TensorBoard
  5. Linear Regression by Pseudo-Inverse
  6. Linear Regression by Gradient Descent (both fits are compared in the sketch after this list)
  7. Underfitting in Linear Regression
  8. Optimal Fitting in Linear Regression
  9. Overfitting in Linear Regression
  10. Nearest Neighbor
  11. Principal Component Analysis
  12. Logical Ops by a 2-layer NN (MSE)
  13. Logical Ops by a 2-layer NN (Cross Entropy)
  14. NotMNIST Deep Feedforward Network (code for the NN and code for the dataset pickle)
  15. NotMNIST CNN
  16. word2vec
  17. Word Prediction/Story Generation using LSTM (sample text: "Belling the Cat" by Aesop)
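
For orientation, here is a minimal sketch, not the repository's own code, of items 5 and 6 above: fitting a line first with the closed-form pseudo-inverse and then with gradient descent on the mean squared error. It assumes TensorFlow 2.x and NumPy; the synthetic data, learning rate, and step count are arbitrary illustrative choices.

```python
import numpy as np
import tensorflow as tf

# Synthetic data: y = 3x + 2 plus a little noise (illustrative only).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(100, 1)).astype(np.float32)
y = (3.0 * x + 2.0 + 0.1 * rng.standard_normal((100, 1))).astype(np.float32)

# Item 5: closed-form fit via the pseudo-inverse of the design matrix [x, 1].
X = np.hstack([x, np.ones_like(x)])
w_pinv = np.linalg.pinv(X) @ y
print("pseudo-inverse [slope, bias]:", w_pinv.ravel())

# Item 6: the same fit by gradient descent on the mean squared error.
x_t, y_t = tf.constant(x), tf.constant(y)
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
lr = 0.5  # small 1-D problem, so a fixed step size is enough here
for _ in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(x_t, w) + b - y_t))
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(lr * dw)
    b.assign_sub(lr * db)
print("gradient descent [slope, bias]:", w.numpy().ravel()[0], b.numpy()[0])
```

Both routes should recover roughly the same slope and bias, which is the point of placing the two experiments side by side.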

Keras on TensorFlow Experiments

  1. NotMNIST Deep Feedforward Network (minimal sketch after this list)
  2. NotMNIST CNN
  3. DCGAN on MNIST
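
As a rough illustration of item 1 above, and again not the repository's own code, the sketch below builds a small dense classifier for 28x28, 10-class NotMNIST-style inputs with the Keras functional API. It assumes TensorFlow 2.x; the random arrays are placeholders standing in for the real NotMNIST data pickle.

```python
import numpy as np
from tensorflow import keras

num_classes = 10                                            # NotMNIST letters A-J
x_train = np.random.rand(512, 28 * 28).astype("float32")    # placeholder images
y_train = np.random.randint(0, num_classes, size=(512,))    # placeholder labels

# Two dense layers with dropout: a small feedforward classifier.
inputs = keras.Input(shape=(28 * 28,))
h = keras.layers.Dense(256, activation="relu")(inputs)
h = keras.layers.Dropout(0.2)(h)
outputs = keras.layers.Dense(num_classes, activation="softmax")(h)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=2, verbose=1)
```

Swapping the dense layers for convolutional ones (item 2) or a generator/discriminator pair (item 3) follows the same compile-and-fit pattern.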