A collection of minimal and clean implementations of machine learning algorithms.
This project targets people who want to learn the internals of machine learning algorithms or implement them from scratch.
The code is much easier to follow than in optimized libraries and easier to experiment with.
All algorithms are implemented in Python, using numpy, scipy and autograd.
- [Deep learning (MLP, CNN, RNN, LSTM)](mla/neuralnet)
- [Linear regression, logistic regression](mla/linear_models.py)
- [Random Forests](mla/ensemble/random_forest.py)
- [SVM with kernels (Linear, Poly, RBF)](mla/svm)
- [K-Means](mla/kmeans.py)
- [K-nearest neighbors](mla/knn.py)
- [Naive Bayes](mla/naive_bayes.py)
- [PCA](mla/pca.py)
- [Factorization machines](mla/fm.py)
- [Gradient Boosting trees (also known as GBDT, GBRT, GBM, XGBoost)](mla/ensemble/gbm.py)
Not yet implemented (TODO):

- t-SNE
- MCMC
- Word2vec
- Adaboost
- HMM
- Restricted Boltzmann machine
To install the package and its core dependencies:

```
git clone https://github.com/rushter/MLAlgorithms
cd MLAlgorithms
pip install scipy numpy
pip install .
```
To run one of the bundled example scripts from the repository root:

```
cd MLAlgorithms
python -m examples.linear_models
```
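The implementations can also be imported and used directly. Below is a minimal sketch, assuming a scikit-learn-style `fit`/`predict` interface and a `LinearRegression` class in `mla/linear_models.py`; the class name and interface shown here are assumptions, so check the module itself for the exact API.

```python
# Minimal usage sketch. The class name and the fit/predict interface are
# assumptions based on the module list above; see mla/linear_models.py
# for the actual API and constructor options.
import numpy as np

from mla.linear_models import LinearRegression  # assumed class name

# Toy data: y = 3x + 1 with a little noise.
rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = 3.0 * X[:, 0] + 1.0 + 0.05 * rng.randn(100)

model = LinearRegression()  # constructor options not shown; check the module
model.fit(X, y)
print(model.predict(X)[:5])
```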
Your contributions are always welcome!