Deep-Learning-Experiments

Notes and experiments to understand deep learning concepts


Deep Learning: Theory and Experiments

Sample code includes links to other repos that I maintain (Advanced Deep Learning with Keras book) or contribute to (Keras).

Lecture Notes

  1. Course Roadmap
  2. Background Materials
  3. Machine Learning Basics
  4. Deep Neural Networks
  5. Regularization
  6. Optimization
  7. Convolutional Neural Networks (CNN)
  8. Deep Networks
  9. Embeddings
  10. Recurrent Neural Networks, LSTM, GRU
  11. AutoEncoders
  12. Generative Adversarial Networks (GAN)
  13. Variational Autoencoder (VAE)
  14. Deep Reinforcement Learning (DRL)

Warning: The following are old experiments that are no longer updated or maintained.

Tensorflow Experiments

  1. Hello World!
  2. Linear Algebra
  3. Matrix Decomposition
  4. Probability Distributions using TensorBoard
  5. Linear Regression by PseudoInverse
  6. Linear Regression by Gradient Descent (a minimal sketch follows after this list)
  7. Underfitting in Linear Regression
  8. Optimal Fitting in Linear Regression
  9. Overfitting in Linear Regression
  10. Nearest Neighbor
  11. Principal Component Analysis
  12. Logical Ops by a 2-layer NN (MSE)
  13. Logical Ops by a 2-layer NN (Cross Entropy)
  14. NotMNIST Deep Feedforward Network: Code for NN and Code for Pickle
  15. NotMNIST CNN
  16. word2vec
  17. Word Prediction/Story Generation using LSTM (sample text: Belling the Cat by Aesop)
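
Below is a minimal sketch of the gradient-descent linear regression idea referenced in the list above. The original notebooks predate TensorFlow 2.x, so the exact API they use differs; this example only illustrates fitting y = wx + b by minimizing mean squared error, with synthetic data made up for the example.

```python
# Minimal sketch: linear regression by gradient descent in TensorFlow 2.x.
# Synthetic data and hyperparameters here are illustrative, not from the repo.
import numpy as np
import tensorflow as tf

# Synthetic data: y = 2x + 1 plus a little noise
x = np.linspace(-1.0, 1.0, 100).astype(np.float32)
y = 2.0 * x + 1.0 + 0.1 * np.random.randn(100).astype(np.float32)

w = tf.Variable(0.0)  # slope
b = tf.Variable(0.0)  # intercept
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = w * x + b
        loss = tf.reduce_mean(tf.square(y_pred - y))  # mean squared error
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print("w =", w.numpy(), "b =", b.numpy())  # should approach 2.0 and 1.0
```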

Keras on Tensorflow Experiments

  1. NotMNIST Deep Feedforward Network (a minimal Keras sketch follows after this list)
  2. NotMNIST CNN
  3. DCGAN on MNIST
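
As a rough companion to the feedforward-network item above, here is a minimal sketch of a dense classifier in Keras. It uses MNIST only because that dataset ships with Keras; the actual NotMNIST notebook loads its data differently, and the layer sizes below are assumptions for illustration.

```python
# Minimal sketch: dense feedforward classifier in Keras (MNIST stand-in for NotMNIST).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

model = models.Sequential([
    layers.Dense(256, activation="relu", input_shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```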