
Deep Learning Lecture Notes and Experiments

Code samples have links to other repositories that I maintain (Advanced Deep Learning with Keras book) or contribute to (Keras).

2020 Version

So much has changed since this course was offered that it is time for a revision. I will keep the original lecture notes at the bottom, but they will no longer be maintained. The 2020 version introduces the following big changes:

  1. Review of Machine Learning - Frustrated with the lack of depth in the ML part, I developed a new course, Foundations of Machine Learning. A good grasp of ML is of paramount importance before studying DL; without it, DL is harder to understand and harder to move forward.

  2. Lecture Notes with Less Clutter - Prior to this version, my lecture notes had too much text. In the 2020 version, I focus on the key concepts and carefully explain the ideas behind them during the lecture. The lecture notes are closely coupled with sample implementations, so we can move quickly from concepts to actual code (a minimal MLP sketch follows the topic list below).

Lecture Notes and Experiments

  1. Course Roadmap
  2. Multilayer Perceptron (MLP)
  3. Convolutional Neural Network (CNN)
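
To give a flavor of how the notes couple concepts with code, here is a minimal MLP classifier sketch in tf.keras. It is illustrative only: the layer sizes, dropout rate, and the use of MNIST are assumptions, and the actual course notebooks may differ.

# Minimal MLP sketch in tf.keras -- illustrative only; the actual
# course notebooks may use different layer sizes and datasets.
import tensorflow as tf

# Load MNIST and flatten 28x28 images into 784-dim vectors
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Two hidden ReLU layers with dropout, softmax output over 10 classes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dropout(0.45),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.45),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=128)
model.evaluate(x_test, y_test)

The same Sequential pattern extends to the CNN topic by swapping the Dense hidden layers for Conv2D and pooling layers.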

Star, Fork, Cite

If you find this work useful, please give it a star, fork it, or cite it:

@misc{atienza2020dl,
  title = {Deep Learning Lecture Notes},
  author = {Atienza, Rowel},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/roatienza/Deep-Learning-Experiments}},
}

Lecture Notes (Old - will no longer be maintained)

  0. Course Roadmap
  1. Background Materials
  2. Machine Learning Basics
  3. Deep Neural Networks
  4. Regularization
  5. Optimization
  6. Convolutional Neural Networks (CNN)
  7. Deep Networks
  8. Embeddings
  9. Recurrent Neural Networks, LSTM, GRU
  10. AutoEncoders
  11. Generative Adversarial Networks (GAN)
      11a. Improved GANs
      11b. Disentangled GAN
      11c. Cross-Domain GAN
  12. Variational Autoencoder (VAE)
  13. Deep Reinforcement Learning (DRL)
  14. Policy Gradient Methods

Warning: The following are old experiments that are no longer updated or maintained.

Tensorflow Experiments

  1. Hello World!
  2. Linear Algebra
  3. Matrix Decomposition
  4. Probability Distributions using TensorBoard
  5. Linear Regression by PseudoInverse (a sketch of the closed-form solution follows this list)
  6. Linear Regression by Gradient Descent
  7. Underfitting in Linear Regression
  8. Optimal Fitting in Linear Regression
  9. Overfitting in Linear Regression
  10. Nearest Neighbor
  11. Principal Component Analysis
  12. Logical Ops by a 2-layer NN (MSE)
  13. Logical Ops by a 2-layer NN (Cross Entropy)
  14. NotMNIST Deep Feedforward Network: Code for NN and Code for Pickle
  15. NotMNIST CNN
  16. word2vec
  17. Word Prediction/Story Generation using LSTM (sample text: Belling the Cat by Aesop)
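
As a hedged sketch of the closed-form approach behind the Linear Regression by PseudoInverse experiment above, the snippet below fits a line with NumPy's Moore-Penrose pseudoinverse. The synthetic data and noise level are assumptions for illustration, not the data used in the notebook.

# Linear regression via the Moore-Penrose pseudoinverse -- a sketch,
# not the exact notebook in this repo.
import numpy as np

# Synthetic data: y = 2x + 1 plus noise (assumed for illustration)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(100, 1))
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal((100, 1))

# Design matrix with a bias column of ones
X = np.hstack([x, np.ones_like(x)])

# Closed-form least-squares solution: w = pinv(X) @ y
w = np.linalg.pinv(X) @ y
print("slope, intercept:", w.ravel())

The closed-form solution is exact but becomes expensive with many features, which is one reason the next experiment fits the same problem with gradient descent.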

Keras on Tensorflow Experiments

  1. NotMNIST Deep Feedforward Network
  2. NotMNIST CNN (a Keras CNN sketch follows this list)
  3. DCGAN on MNIST
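
Below is a hedged sketch of a small Keras CNN classifier in the spirit of the NotMNIST CNN experiment above. Since NotMNIST requires a separate download, MNIST is used here as a stand-in with the same 28x28 grayscale image shape; the architecture and hyperparameters are illustrative assumptions, not the notebook's exact settings.

# CNN classifier sketch in Keras -- illustrative only; MNIST stands in
# for NotMNIST, which needs a separate download.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Two conv/pool stages, then flatten, dropout, and softmax over 10 classes
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)
model.evaluate(x_test, y_test)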