Hands-On Deep Learning Algorithms With Python
Master Deep Learning Algorithms with Extensive Math by Implementing them using TensorFlow
About the book
Deep learning is one of the most popular domains in the artificial intelligence (AI) space, allowing you to develop multi-layered models of varying complexity. This book is designed to take you from basic deep learning algorithms to the more advanced ones. Each algorithm is presented in three stages: first you understand it intuitively, then you master the underlying math behind it, and finally you learn how to implement it in TensorFlow, step by step.
The book covers almost all the state-of-the-art deep learning algorithms. First, you will get a solid understanding of the fundamentals of neural networks and several variants of gradient descent. Later, you will explore RNNs, bidirectional RNNs, LSTM, GRU, seq2seq models, CNNs, capsule networks, and more. Finally, you will master GANs, several GAN variants, and a number of different autoencoders.
By the end of this book, you will be equipped with the skills you need to implement deep learning in your projects.
Get the book
Table of contents
1. Introduction to Deep Learning
- 1.1. What is Deep Learning?
- 1.2. Biological and Artificial Neurons
- 1.3. ANN and its Layers
- 1.4. Exploring Activation Functions
- 1.5. Forward Propagation in ANN
- 1.6. How does an ANN learn?
- 1.7. Debugging Gradient Descent with Gradient Checking
- 1.8. Putting it all together
- 1.9. Building a Neural Network from Scratch (see the sketch below)
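
As a taste of section 1.9, here is a minimal from-scratch sketch: a two-layer NumPy network trained on XOR with plain gradient descent. The layer sizes, learning rate, and epoch count are arbitrary illustrative choices, not the book's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([[0], [1], [1], [0]])              # XOR targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

lr = 0.5
for epoch in range(5000):
    # Forward propagation.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # Backpropagation of the squared-error loss.
    d_out = (y_hat - y) * y_hat * (1 - y_hat)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid; b1 -= lr * d_hid.sum(axis=0)

print(np.round(y_hat.ravel(), 2))  # should approach [0, 1, 1, 0]
```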
2. Getting to Know TensorFlow
- 2.1. What is TensorFlow?
- 2.2. Understanding Computational Graphs and Sessions
- 2.3. Variables, Constants, and Placeholders
- 2.4. Introducing TensorBoard
- 2.5. Handwritten digits classification using TensorFlow
- 2.6. Visualizing the Computational Graph in TensorBoard
- 2.7. Introducing Eager execution
- 2.8. Math operations in TensorFlow
- 2.9. TensorFlow 2.0 and Keras
- 2.10. MNIST digits classification in TensorFlow 2.0 (see the sketch below)
- 2.11. Should we use Keras or TensorFlow?
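
A minimal sketch of the kind of model section 2.10 builds, using the standard Keras Sequential API on MNIST (the exact architecture here is an illustrative assumption, not the book's code):

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small feed-forward classifier: flatten -> hidden layer -> softmax.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```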
3. Gradient Descent and its variants
- 3.1. Demystifying Gradient Descent
- 3.2. Performing Gradient Descent in Regression
- 3.3. Gradient Descent vs Stochastic Gradient Descent
- 3.4. Momentum based Gradient Descent
- 3.5. Adaptive methods of Gradient Descent
- 3.6. Implementing Various Gradient Descent Methods from Scratch (see the sketch below)
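
As a concrete example of sections 3.2 and 3.6, vanilla gradient descent on a linear regression fits in a few lines; the data here is synthetic and the learning rate is an arbitrary choice:

```python
import numpy as np

# Fit y = m*x + b by minimizing mean squared error with vanilla gradient descent.
rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # noisy line: slope 2, intercept 1

m, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    y_hat = m * x + b
    dm = 2 * np.mean((y_hat - y) * x)  # dL/dm for the MSE loss
    db = 2 * np.mean(y_hat - y)        # dL/db
    m -= lr * dm
    b -= lr * db

print(round(m, 2), round(b, 2))  # should land close to 2.0 and 1.0
```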
4. Generating Song Lyrics with RNN
- 4.1. Hola Recurrent Neural Networks
- 4.2. Forward Propagation in RNN (see the sketch below)
- 4.3. Backpropagation through time (BPTT)
- 4.4. Deriving BPTT step by step
- 4.5. Vanishing and Exploding Gradients
- 4.6. Generating song lyrics using RNN
- 4.7. Different types of RNN architectures
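
The forward pass of a vanilla RNN (section 4.2) is just the recurrence h_t = tanh(U x_t + W h_{t-1} + b), applied once per time step. A toy NumPy sketch with arbitrary dimensions:

```python
import numpy as np

# One forward pass of a vanilla RNN over a toy sequence.
T, input_dim, hidden_dim = 5, 3, 4
rng = np.random.default_rng(0)

U = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)

xs = rng.normal(size=(T, input_dim))  # a random input sequence
h = np.zeros(hidden_dim)              # initial hidden state
for t in range(T):
    h = np.tanh(U @ xs[t] + W @ h + b)  # the state carries information forward
    print(f"h_{t+1} =", np.round(h, 3))
```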
5. Improvements to the RNN
- 5.1. LSTM to the Rescue
- 5.2. Understanding the LSTM cell
- 5.3. Forward propagation in LSTM
- 5.4. Backpropagation in LSTM
- 5.5. Deriving backpropagation of LSTM Step by step
- 5.6. Predicting Bitcoin's price using LSTM (see the sketch below)
- 5.7. Gated Recurrent Units
- 5.8. Understanding GRU cell
- 5.9. Forward propagation in GRU cell
- 5.10. Deriving backpropagation in GRU cell
- 5.11. Implementing GRU cell in TensorFlow
- 5.12. Bidirectional RNN
- 5.13. Going Deep with Deep RNN
- 5.14. Language Translation with Seq2seq Models
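
To illustrate LSTM-based forecasting (section 5.6 applies the same idea to Bitcoin prices), here is a sketch that predicts the next value of a sine wave with Keras; the window size and layer width are arbitrary illustrative choices, not the book's code:

```python
import numpy as np
import tensorflow as tf

# Turn a sine wave into (window of past values) -> (next value) pairs.
series = np.sin(np.linspace(0, 50, 1000))
window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # LSTM expects (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),  # regression head: predict one future value
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:1]).item(), y[0])  # prediction vs. true next value
```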
6. Demystifying Convolutional Networks
- 6.1. What is a CNN?
- 6.2. Architecture of CNN
- 6.3. Math of CNN
- 6.4. Implementing CNN in TensorFlow (see the sketch below)
- 6.5. Different types of CNN architectures
- 6.6. Capsule networks
- 6.7. Building capsule networks in TensorFlow
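
A minimal CNN in TensorFlow (section 6.4) can be expressed with the Keras layers API; the conv-pool-conv-pool-dense layout below is a common illustrative choice, not the book's exact architecture:

```python
import tensorflow as tf

# A small convolutional classifier for 28x28 grayscale images (e.g. MNIST).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),   # downsample the feature maps
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),  # 10 digit classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```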
7. Learning Text Representations
- 7.1. Understanding Word2vec Model
- 7.2. Continuous Bag of Words
- 7.3. Math of CBOW
- 7.4. Skip-Gram model
- 7.5. Math of Skip-Gram
- 7.6. Various training strategies
- 7.7. Building a word2vec model using Gensim (see the sketch below)
- 7.8. Visualizing word embeddings in TensorBoard
- 7.9. Converting documents to vectors using doc2vec
- 7.10. Finding similar documents using Doc2vec
- 7.11. Understanding skip thoughts algorithm
- 7.12. Quick thoughts for sentence embeddings
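
Building a word2vec model with Gensim (section 7.7) takes only a few lines. This toy sketch trains a skip-gram model on three hand-made sentences; note that the dimension argument is `vector_size` in gensim >= 4.0 (`size` in older versions):

```python
from gensim.models import Word2Vec

# A tiny corpus of pre-tokenized sentences (real corpora are far larger).
sentences = [
    ['deep', 'learning', 'uses', 'neural', 'networks'],
    ['neural', 'networks', 'learn', 'representations'],
    ['word2vec', 'learns', 'word', 'representations'],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv['neural'][:5])                   # first few embedding dimensions
print(model.wv.most_similar('neural', topn=2))  # nearest words by cosine similarity
```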
8. Generating Images using GANs
- 8.1. Distinguishing generative and discriminative models
- 8.2. Say hello to GANs
- 8.3. Architecture of GANs
- 8.4. Demystifying GAN loss function
- 8.5. Generating images using GAN in TensorFlow (see the sketch below)
- 8.6. DCGAN - Adding convolution to the GAN
- 8.7. Implementing DCGAN to generate CIFAR images
- 8.8. Least Squares GAN
- 8.9. Building LSGAN in TensorFlow
- 8.10. WGAN - GANs with Wasserstein distance
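
The adversarial setup of sections 8.2 through 8.5 reduces to two networks and two losses. Here is a hedged sketch of one GAN training step in TensorFlow 2.x; the dense architectures and hyperparameters are arbitrary stand-ins, not the book's DCGAN code:

```python
import tensorflow as tf

latent_dim = 100

# Generator: noise vector -> flattened 28x28 "image" in [-1, 1].
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(latent_dim,)),
    tf.keras.layers.Dense(784, activation='tanh'),
])

# Discriminator: flattened image -> probability of being real.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake, training=True)
        # Discriminator pushes real -> 1 and fake -> 0; the generator
        # tries to fool it by pushing fake -> 1.
        d_loss = bce(tf.ones_like(real_out), real_out) + \
                 bce(tf.zeros_like(fake_out), fake_out)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return d_loss, g_loss
```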
9. Learning more about GANs
- 9.1. Conditional GAN
- 9.2. Generating specific digits using CGAN
- 9.3. Understanding InfoGAN
- 9.4. Architecture of InfoGAN
- 9.5. Constructing InfoGAN in TensorFlow
- 9.6. Translating images using CycleGAN
- 9.7. Converting photos to paintings using CycleGAN
- 9.8. Text to image synthesis using StackGAN
10. Reconstructing inputs using Autoencoders
- 10.1. What is an Autoencoder?
- 10.2. Understanding the architecture of autoencoders
- 10.3. Reconstructing MNIST images using autoencoders (see the sketch below)
- 10.4. Autoencoders with convolution
- 10.5. Building convolution autoencoder
- 10.6. Exploring denoising autoencoder
- 10.7. Denoising images using DAE
- 10.8. Understanding sparse autoencoders
- 10.9. Building sparse autoencoders
- 10.10. Learning to use contractive autoencoders
- 10.11. Implementing contractive autoencoders
- 10.12. Dissecting variational autoencoders
- 10.13. Generating images using VAE
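
The basic autoencoder of sections 10.1 through 10.3 can be sketched as a single bottleneck in Keras; the 32-unit code size here is an arbitrary illustrative choice:

```python
import tensorflow as tf

# Compress 784-pixel MNIST images down to 32 numbers and reconstruct them.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(784,)),  # encoder
    tf.keras.layers.Dense(784, activation='sigmoid'),                  # decoder
])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# The target equals the input: the network learns to reconstruct what it sees.
autoencoder.fit(x_train, x_train, epochs=5, validation_data=(x_test, x_test))
```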
11. Exploring few-shot learning algorithms
- 11.1. What is few-shot learning?
- 11.2. Understanding Siamese Networks
- 11.3. Prototypical Networks (see the sketch below)
- 11.4. Relation Networks
- 11.5. Matching Networks
- 11.6. Architecture of Matching networks
- 11.7. What's Next?
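
The heart of prototypical networks (section 11.3) is simple enough to sketch in NumPy: average each class's support embeddings into a prototype, then classify a query by its nearest prototype. The embeddings below are random placeholders standing in for a real encoder's output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_support, dim = 3, 5, 8

# Hypothetical support-set embeddings, shape (classes, shots, embedding dim).
support = rng.normal(size=(n_classes, n_support, dim))
query = rng.normal(size=(dim,))  # one query embedding

prototypes = support.mean(axis=1)                   # one prototype per class
dists = np.linalg.norm(prototypes - query, axis=1)  # Euclidean distances
print("predicted class:", int(np.argmin(dists)))    # nearest prototype wins
```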