This repository records my personal machine learning study notes (covering neural networks, deep learning, and so on), together with a collection of related papers (needed for graduation~~). I have worked through every linked resource myself, or at least consider it reliable.
- Regression
  - Linear Regression
    - 《DeepLearning》5.1.4
  - Logistic Regression
    - Cost Function
    - Loss Function
  - Sigmoid function
  - Softmax function
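As a quick reference for the two squashing functions above, here is a minimal pure-Python sketch (my own illustration, not taken from any of the linked notes; the max-subtraction in softmax is the standard numerical-stability trick):

```python
import math

def sigmoid(z):
    """Logistic sigmoid: squashes any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    """Softmax: turns a list of scores into probabilities summing to 1.
    Subtracting max(zs) first avoids overflow without changing the result."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))            # 0.5
print(softmax([1.0, 2.0, 3.0]))
```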
- Classification
  - Binary Classification
- Neural Network
  - 《机器学习》Chapter 5
  - Back Propagation
  - Rectified Linear Unit (ReLU)
  - Activation Function
    - Understanding Activation Functions in Neural Networks
    - Sigmoid
    - tanh
    - ReLU (Leaky ReLU)
  - Autoencoder
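For reference, the activation functions listed above can be sketched in a few lines of pure Python (illustrative only; the `alpha` default for Leaky ReLU is a common but arbitrary choice):

```python
import math

def sigmoid(z):
    # Squashes to (0, 1); saturates for large |z|.
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # Squashes to (-1, 1); zero-centred, unlike sigmoid.
    return math.tanh(z)

def relu(z):
    # Identity for positive inputs, zero otherwise.
    return max(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Keeps a small slope for negative inputs, avoiding "dead" units
    # whose gradient is exactly zero.
    return z if z > 0 else alpha * z
```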
- Evaluation
  - bias
  - variance
- Regularization
  - 《DeepLearning》7
  - L1 regularization
  - L2 regularization (weight decay)
  - Dropout regularization ("Inverted dropout")
  - Tricks on Machine Learning (Initialization, Regularization and Gradient Checking)
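Inverted dropout is easy to misremember, so here is a minimal sketch (pure Python, names my own): units are dropped with probability 1 − keep_prob and the survivors are divided by keep_prob, so the expected activation is unchanged and no rescaling is needed at test time.

```python
import random

def inverted_dropout(activations, keep_prob=0.8, seed=None):
    """Inverted dropout: zero each unit with probability 1 - keep_prob,
    then divide survivors by keep_prob (hence "inverted"); at test time
    the layer is simply left untouched."""
    rng = random.Random(seed)
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]
```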
- Recurrent Neural Network
  - 《DeepLearning》10
  - Bidirectional Recurrent Neural Network
  - Recursive Neural Network
  - LSTM
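As a memory aid, one step of a vanilla recurrent cell (the thing LSTM improves on) computes h_t = tanh(Wxh·x_t + Whh·h_{t-1} + b). A sketch with plain lists standing in for matrices; all names are illustrative:

```python
import math

def rnn_step(x, h_prev, Wxh, Whh, bh):
    """One step of a vanilla RNN: new hidden state from the current
    input x and the previous hidden state h_prev."""
    h = []
    for i in range(len(bh)):
        s = bh[i]
        s += sum(Wxh[i][j] * x[j] for j in range(len(x)))
        s += sum(Whh[i][j] * h_prev[j] for j in range(len(h_prev)))
        h.append(math.tanh(s))  # tanh keeps the state in (-1, 1)
    return h
```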
- Convolutional Neural Network
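The core operation is easy to state in one dimension. A sketch of "valid" 1-D convolution (strictly speaking cross-correlation, which is what most deep-learning libraries actually compute):

```python
def conv1d_valid(signal, kernel):
    """Slide the kernel over the signal and take dot products;
    'valid' means only positions where the kernel fits entirely."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

print(conv1d_valid([1, 2, 3, 4], [1, 1]))  # [3, 5, 7]
```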
- Supervised learning
  - Structured Data && Unstructured Data
  - 《DeepLearning》5.7
  - Support Vector Machine
    - Support Vector Machine (Wiki)
    - 《机器学习》Chapter 6
    - Decision Boundary
    - Kernel
  - Decision tree
- Unsupervised learning
  - 《DeepLearning》5.8
  - Clustering
    - 《机器学习》Chapter 9
    - K-means
      - 《DeepLearning》5.8.2
  - Dimensionality Reduction
    - 《机器学习》Chapter 10
    - PCA
      - 《DeepLearning》2.12 && 5.8.1
      - Principal Component Analysis Problem Formulation (Coursera)
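K-means (Lloyd's algorithm) fits in a few lines; a 1-D sketch under my own naming, alternating the two steps: assign each point to its nearest centroid, then move each centroid to the mean of its cluster:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Lloyd's algorithm on 1-D data; returns the sorted centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random distinct points as init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old centroid.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```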
- Anomaly detection
- Collaborative filtering
- Normalization
  - Mean Normalization
  - Batch Normalization
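Mean normalization is the simplest of these: subtract the mean, divide by the range. A one-function sketch (batch normalization applies the same centre-and-scale idea per mini-batch, with learned scale and shift):

```python
def mean_normalize(xs):
    """Mean normalization: (x - mean) / (max - min), so features
    end up roughly in [-0.5, 0.5] with zero mean."""
    mu = sum(xs) / len(xs)
    span = max(xs) - min(xs)
    return [(x - mu) / span for x in xs]

print(mean_normalize([0, 5, 10]))  # [-0.5, 0.0, 0.5]
```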
- Optimization
  - Gradient Descent
    - Stochastic Gradient Descent
      - 《DeepLearning》5.9
    - Batch Gradient Descent
    - Mini-batch Gradient Descent (between the previous two)
    - Vanishing and Exploding gradient problem
    - Random Initialization
    - Gradient checking
    - Momentum
    - RMSprop (root mean squared prop)
    - Adam optimization algorithm (Adaptive moment estimation)
    - Learning rate decay
    - Gradient Descent, Momentum and Adam
    - Stochastic Gradient Descent
  - Exponentially weighted average
    - Bias correction
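Momentum and the bias-corrected exponentially weighted average above share one update rule, v = βv + (1 − β)x; a minimal sketch of both (my own illustration, minimizing a toy quadratic):

```python
def gd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum: keep an exponentially weighted
    average v of past gradients and step along it instead of the raw
    gradient."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + (1 - beta) * grad(x)
        x -= lr * v
    return x

def ewa_bias_corrected(values, beta=0.9):
    """Exponentially weighted average with the bias correction
    v_t / (1 - beta**t), which fixes the zero-initialized start-up."""
    v, out = 0.0, []
    for t, x in enumerate(values, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t))
    return out

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = gd_momentum(lambda x: 2 * (x - 3), x0=0.0)
```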
- Online Learning
- Artificial Data Synthesis
- Ceiling Analysis
- Newton's method
- Maximum Likelihood Estimation
- Adversarial Training
- Vectorization
- Orthogonalization
- Practical Methodology
- Parameter && Hyperparameter tuning
- Linear factor model
- Bayes optimal error (Bayes error)
- Error analysis
- Restricted Boltzmann Machine (RBMs)
- Transfer learning
- Multi-task learning
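Of the topics above, Newton's method is compact enough to sketch here (root-finding form; for optimization the same update is applied to the gradient, dividing by the second derivative instead):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: repeatedly replace x with x - f(x)/f'(x)
    until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Find sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```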
Note:
《机器学习》 refers to the book *Machine Learning* by Zhou Zhihua.
《DeepLearning》 refers to the book *Deep Learning* by Ian Goodfellow, Yoshua Bengio and Aaron Courville.