With this collection of notebooks, I would like to share my experience of learning and researching machine learning techniques. To this end, each notebook first provides the reader with a brief mathematical background. Thereafter, it gives a naive Python implementation of the respective technique, written with a focus on readability.
Recreating techniques by implementing them from scratch necessitates understanding their atomic details. Listen to Yann LeCun, Yoshua Bengio, and Andrej Karpathy. They cannot all be wrong, can they? :)
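To give a taste of what "naive but readable" means here, below is a minimal sketch in that spirit (my own illustrative snippet, not taken from any of the notebooks): ordinary least squares fitted via the normal equations in plain NumPy.

```python
import numpy as np

def fit_least_squares(X, y):
    """Ordinary least squares: w = (X^T X)^{-1} X^T y."""
    # Prepend a bias column of ones.
    X = np.column_stack([np.ones(len(X)), X])
    # Solve the normal equations; solving is more stable than inverting.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy data: y = 2x + 1 plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(100)
print(fit_least_squares(X, y))  # approximately [1.0, 2.0]
```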
Feel free to use or modify any code provided in this repository. However, I have one request: please do not forget that "No one has ever become poor by giving." – Anne Frank. Please consider making a small donation to the World Food Programme.
Prerequisite: Linear Algebra
- Machine Learning
  - Naive Bayes
  - Regression
  - Maximum Likelihood vs Maximum A Posteriori Estimation
  - Support Vector Machines
  - Loss Function Landscape
  - Kernelization
  - Generative vs Discriminative Models
  - Gaussian Process
  - Bayesian Optimization
  - Bagging
  - From Decision Tree to Random Forest
  - Boosting
- Deep Learning
  - Linear Classification
  - Vanilla Nets
  - Convolutions
  - Forward and Backward Passes in Nets
  - Optimization as API
  - Vanishing Gradient
  - Batch Normalization
  - Dropout
  - Recurrent Nets
  - LSTM
  - Generative Adversarial Networks
  - Graph Convolutional Networks
  - Laplace Redux
- Reinforcement Learning
  - Search
  - MDP
  - RL
  - Deep Q-Network
- Numerical Optimization
  - Descent them all (sketched briefly after this list):
    - Gradient Descent
    - Stochastic Gradient Descent
    - Momentum
    - Nesterov's Momentum
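As a preview of the "Descent them all" topic, here is a hedged sketch of the standard update rules on a toy one-dimensional quadratic. This is my own illustrative code, not the notebook's; stochastic gradient descent would simply replace the exact gradient below with a noisy minibatch estimate.

```python
def grad(w):
    """Gradient of the toy objective f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def gradient_descent(w, lr=0.1, steps=200):
    for _ in range(steps):
        w -= lr * grad(w)  # w <- w - lr * df/dw (SGD: noisy minibatch gradient here)
    return w

def momentum(w, lr=0.1, beta=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)  # accumulate a decaying velocity
        w += v
    return w

def nesterov(w, lr=0.1, beta=0.9, steps=200):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w + beta * v)  # look ahead before taking the gradient
        w += v
    return w

for opt in (gradient_descent, momentum, nesterov):
    print(opt.__name__, opt(w=0.0))  # all converge to the minimizer w = 3
```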