A complete book in Hebrew on machine learning and deep learning

MIT License

Deep-Learning-in-Hebrew

Machine Learning and Deep Learning in Hebrew

Add a star if the repository helped you 😊

For any issue, please contact us at Avrahamsapir1@gmail.com.

People

Authors: Avraham Raviv and Mike Erlihson

Chapter Authors:

Contributors:

Citation

If you find this book useful in your research work, please consider citing:

@book{MDLH,
  author = {Raviv, Avraham and Erlihson, Mike},
  title  = {Machine and Deep Learning in Hebrew},
  year   = {2021}
}

Table of Contents

1.1. What is Machine Learning?

  • 1.1.1. The Basic Concept

  • 1.1.2. Data, Tasks and Learning

1.2. Applied Math

  • 1.2.1. Linear Algebra

  • 1.2.2. Calculus

  • 1.2.3. Probability

2.1. Supervised Learning Algorithms

  • 2.1.1. Support Vector Machines (SVM)

  • 2.1.2. Naïve Bayes

  • 2.1.3. K-Nearest Neighbors (K-NN)

  • 2.1.4. Quadratic/Linear Discriminant Analysis (QDA/LDA)

  • 2.1.5. Decision Trees

2.2. Unsupervised Learning Algorithms

  • 2.2.1. K-means

  • 2.2.2. Mixture Models

  • 2.2.3. Expectation–Maximization (EM)

  • 2.2.4. Hierarchical Clustering

  • 2.2.5. Local Outlier Factor

2.3. Dimensionality Reduction

  • 2.3.1. Principal Components Analysis (PCA)

  • 2.3.2. t-distributed Stochastic Neighbor Embedding (t-SNE)

  • 2.3.3. Uniform Manifold Approximation and Projection (UMAP)

2.4. Ensemble Learning

  • 2.4.1. Introduction to Ensemble Learning

  • 2.4.2. Bagging

  • 2.4.3. Boosting

3.1. Linear Regression

  • 3.1.1. The Basic Concept

  • 3.1.2. Gradient Descent

  • 3.1.3. Regularization and Cross Validation

  • 3.1.4. Linear Regression as a Classifier

3.2. Softmax Regression

  • 3.2.1. Logistic Regression

  • 3.2.2. Cross Entropy and Gradient Descent

  • 3.2.3. Optimization

  • 3.2.4. Softmax Regression – Multiclass Logistic Regression

  • 3.2.5. Softmax Regression as a Neural Network

4.1. MLP – Multilayer Perceptrons

  • 4.1.1. From a Single Neuron to a Deep Neural Network

  • 4.1.2. Activation Function

  • 4.1.3. XOR

4.2. Computational Graphs and Propagation

  • 4.2.1. Computational Graphs

  • 4.2.2. Forward and Backward Propagation

  • 4.2.3. Backpropagation and Stochastic Gradient Descent

4.3. Optimization

  • 4.3.1. Data Normalization

  • 4.3.2. Weight Initialization

  • 4.3.3. Batch Normalization

  • 4.3.4. Mini-Batch

  • 4.3.5. Gradient Descent Optimization Algorithms

4.4. Generalization

  • 4.4.1. Regularization

  • 4.4.2. Weight Decay

  • 4.4.3. Model Ensembles and Dropout

  • 4.4.4. Data Augmentation

5.1. Convolutional Layers

  • 5.1.1. From Fully-Connected Layers to Convolutions

  • 5.1.2. Padding, Stride and Dilation

  • 5.1.3. Pooling

  • 5.1.4. Training

  • 5.1.5. Convolutional Neural Networks (LeNet)

5.2. CNN Architectures

  • 5.2.1. AlexNet

  • 5.2.2. VGG

  • 5.2.3. GoogLeNet

  • 5.2.4. Residual Networks (ResNet)

  • 5.2.5. Densely Connected Networks (DenseNet)

  • 5.2.6. U-Net

  • 5.2.7. Transfer Learning

6.1. Sequence Models

  • 6.1.1. Recurrent Neural Networks

  • 6.1.2. Learning Parameters

6.2. RNN Architectures

  • 6.2.1. Long Short-Term Memory (LSTM)

  • 6.2.2. Gated Recurrent Units (GRU)

  • 6.2.3. Deep RNN

  • 6.2.4. Bidirectional RNN

  • 6.2.5. Sequence to Sequence Learning

7.1. Variational AutoEncoder (VAE)

  • 7.1.1. Dimensionality Reduction

  • 7.1.2. Autoencoders (AE)

  • 7.1.3. Variational AutoEncoders (VAE)

7.2. Generative Adversarial Networks (GANs)

  • 7.2.1. Generator and Discriminator

  • 7.2.2. DCGAN

  • 7.2.3. Conditional GAN (cGAN)

  • 7.2.4. Pix2Pix

  • 7.2.5. CycleGAN

  • 7.2.6. Progressive Growing GAN (ProGAN)

  • 7.2.7. StyleGAN

  • 7.2.8. Wasserstein GAN

7.3. Auto-Regressive Generative Models

  • 7.3.1. PixelRNN

  • 7.3.2. PixelCNN

  • 7.3.3. Gated PixelCNN

  • 7.3.4. PixelCNN++

8.1. Sequence to Sequence Learning and Attention

  • 8.1.1. Attention in Seq2Seq Models

  • 8.1.2. Bahdanau Attention and Luong Attention

8.2. Transformer

  • 8.2.1. Positional Encoding

  • 8.2.2. Self-Attention Layer

  • 8.2.3. Multi-Head Attention

  • 8.2.4. Transformer End-to-End

  • 8.2.5. Transformer Applications

9.1. Object Detection

  • 9.1.1. Introduction to Object Detection

  • 9.1.2. R-CNN

  • 9.1.3. You Only Look Once (YOLO)

  • 9.1.4. Single Shot Detector (SSD)

  • 9.1.5. Spatial Pyramid Pooling (SPP-net)

  • 9.1.6. Feature Pyramid Networks

  • 9.1.7. Deformable Convolutional Networks

  • 9.1.8. DETR: Object Detection with Transformers

9.2. Segmentation

  • 9.2.1. Semantic Segmentation vs. Instance Segmentation

  • 9.2.2. SegNet Neural Network

  • 9.2.3. Atrous Convolutions

  • 9.2.4. Atrous Spatial Pyramid Pooling

  • 9.2.5. Using Conditional Random Fields to Improve the Final Output

  • 9.2.6. See More Than Once – Kernel-Sharing Atrous Convolution

9.3. Face Recognition and Pose Estimation

  • 9.3.1. Face Recognition

  • 9.3.2. Pose Estimation

9.5. Few-Shot Learning

  • 9.5.1. The Problem

  • 9.5.2. Metric Learning

  • 9.5.3. Meta-Learning (Learning-to-Learn)

  • 9.5.4. Data Augmentation

  • 9.5.5. Zero-Shot Learning

10.1. Language Models and Word Representation

  • 10.1.1. Basic Language Models

  • 10.1.2. Word Representation (Vectors) and Word Embeddings

  • 10.1.3. Contextual Embeddings

11.1. Introduction to RL

  • 11.1.1. Markov Decision Process (MDP) and RL

  • 11.1.2. Planning

  • 11.1.3. Learning Algorithms

11.2. Model Free Prediction

  • 11.2.1. Monte-Carlo (MC) Policy Evaluation

  • 11.2.2. Temporal Difference (TD) – Bootstrapping

  • 11.2.3. TD(λ)

11.3. Model Free Control

  • 11.3.1. SARSA – On-Policy TD Control

  • 11.3.2. Q-Learning

  • 11.3.3. Function Approximation

  • 11.3.4. Policy-Based RL

  • 11.3.5. Actor-Critic

11.4. Model Based Control

  • 11.4.1. Known Model – Dyna Algorithm

  • 11.4.2. Known Model – Tree Search

  • 11.4.3. Planning for Continuous Action Space

11.5. Exploration and Exploitation

  • 11.5.1. N-Armed Bandits

  • 11.5.2. Full MDP

11.6. Learning From an Expert

  • 11.6.1. Imitation Learning

  • 11.6.2. Inverse RL

11.7. Partially Observed Markov Decision Process (POMDP)

12.1. Introduction to Graphs

  • 12.1.1. Represent Data as a Graph

  • 12.1.2. Tasks on Graphs

  • 12.1.3. The Challenge of Learning Graphs


All rights reserved ©