This is the public repository for the 365 Data Science ML Algorithms Course by Ken Jee and Jeff Li. In this course, we walk you through the ins and outs of each ML algorithm. We did not build this course on our own; we stood on the shoulders of giants, and we think it's only fair to credit all of the resources we used, since we could not have created this course without the help of the ML community. This course includes the following:
- Detailed explanations of each ML algorithm (listed below) with specifics on how they work, pros and cons, when to use them, and data preprocessing needed for each one.
- Two projects using all of the classification and regression algorithms with detailed instructions on parameter tuning
- Resources that we used to build the course so you have additional details on each topic
Use the discount link for our 3-course bundle (limited time: 68% off!) --> The Machine Learning A-Z Bundle
Please go to Ankiweb.net to download Anki and to sign up for an account. Please go here to download the flashcards for this course.
- Linear Regression, Clearly Explained!!! by StatQuest
- Linear Regression by Jim Frost
- 7 Classical Assumptions of Ordinary Least Squares (OLS) Linear Regression
- Gauss-Markov Theorem
- Linear Regression - Detailed View
- Building Linear Regression (Least Squares) with Linear Algebra
- Linear Regression using Gradient Descent
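As a hands-on complement to the linear regression resources above, here is a minimal sketch that fits ordinary least squares both with scikit-learn and with the closed-form normal equation. The synthetic data and coefficient values are made up purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0 + rng.normal(scale=0.1, size=100)

# scikit-learn fit
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)

# Normal equation: beta = (X^T X)^-1 X^T y, with a bias column appended
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
beta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(beta)  # last entry is the intercept
```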
- What Are L1, L2 and Elastic Net Regularization in Neural Networks?
- When will L1 regularization work better than L2 and vice versa?
- What is the difference between L1 and L2 regularization? How does it solve the problem of overfitting? Which regularizer to use and when?
- What is elastic net regularization, and how does it solve the drawbacks of Ridge (L2) and Lasso (L1)?
- Ridge, LASSO, and ElasticNet Regression
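To see the practical difference between the penalties discussed above, here is a minimal sketch comparing Ridge (L2), Lasso (L1), and Elastic Net on the same synthetic data. The `alpha` and `l1_ratio` values are illustrative assumptions, not tuned settings.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# only the first two features are informative
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

for name, model in [
    ("ridge", Ridge(alpha=1.0)),                      # shrinks all coefficients
    ("lasso", Lasso(alpha=0.1)),                      # tends to zero out irrelevant ones
    ("elastic", ElasticNet(alpha=0.1, l1_ratio=0.5)), # mixes both penalties
]:
    model.fit(X, y)
    print(name, np.round(model.coef_, 2))
```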
- The Intuitive Explanation of Logistic Regression
- StatQuest: Logistic Regression
- Logistic Regression by Andrew Ng
- Logistic Regression by Amherst College
- Intuition behind Log-loss score
- Log Loss Function by Alex Dyakonov
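Tying the logistic regression and log-loss resources together, here is a minimal sketch that fits a classifier and computes the log-loss it is trained to minimize. The dataset is synthetic and the hyperparameters are defaults, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# predicted probability of the positive class for each sample
proba = clf.predict_proba(X)[:, 1]
print("log loss:", round(log_loss(y, proba), 4))
```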
- Gradient Descent From Scratch by Analytics Vidhya
- Gradient descent, how neural networks learn
- Stochastic Gradient Descent, Clearly Explained!!! by Josh Starmer
- Gradient Descent Intuition - How Machines Learn
- The Math and Intuition Behind Gradient Descent by Suraj Bansal
- Batch gradient descent versus stochastic gradient descent
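Here is a minimal from-scratch sketch contrasting batch and stochastic gradient descent on a one-feature least-squares problem, in the spirit of the resources above. The learning rates and epoch counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=200)
y = 4.0 * x + 2.0 + rng.normal(scale=0.1, size=200)

def batch_gd(lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        err = w * x + b - y              # residuals over the full dataset
        w -= lr * 2 * np.mean(err * x)   # gradient of MSE w.r.t. w
        b -= lr * 2 * np.mean(err)       # gradient of MSE w.r.t. b
    return w, b

def sgd(lr=0.05, epochs=20):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(x)):  # one random sample at a time
            err = w * x[i] + b - y[i]
            w -= lr * 2 * err * x[i]
            b -= lr * 2 * err
    return w, b

print("batch:", batch_gd())
print("sgd:  ", sgd())
```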
- Decision Trees Explained by James Thorn
- A Guide to Decision Trees for Beginners
- Decision and Classification Trees, Clearly Explained!!! by Josh Starmer
- Information Gain and Mutual Information for Machine Learning by Jason Brownlee
- A Simple Explanation of Information Gain and Entropy by Victor Zhou
- How to program a decision tree in Python from 0
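The entropy and information-gain resources above boil down to a small piece of arithmetic; here is a minimal sketch of it. The toy label arrays are made-up examples of a parent node and one candidate split.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

parent = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
left   = np.array([1, 1, 1, 1, 0])   # left branch of a candidate split
right  = np.array([0, 0, 0, 0, 0])   # right branch

n = len(parent)
gain = entropy(parent) - (len(left) / n) * entropy(left) - (len(right) / n) * entropy(right)
print("information gain:", round(gain, 3))
```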
- Building Intuition for Random Forests by Rishi Sidhu
- An Introduction to Random Forest Algorithm for beginners
- Feature Importance in Random Forest
- Detailed Explanation of Random Forests Features importance Bias
- Random Forest: A Complete Guide for Machine Learning by Niklas Donges
- Random Forest Simple Explanation by Will Koehrsen
- Why Choose Random Forest and Not Decision Trees
- When to use Random Forest
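As a quick companion to the random forest resources above, here is a minimal sketch that fits a forest and inspects its impurity-based feature importances. The dataset choice and hyperparameters are illustrative, not tuned.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(rf.score(X_te, y_te), 3))
print("top importances:", sorted(rf.feature_importances_, reverse=True)[:3])
```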
- The Intuition Behind Gradient Boosting & XGBoost by Bobby Tan
- Gradient Boosting Algorithm: A Complete Guide for Beginners
- Gradient Boosting Trees vs. Random Forests
- Gradient Boosting In Classification: Not a Black Box Anymore!
- A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning
- XGBoost Paper
- A Gentle Introduction to XGBoost for Applied Machine Learning
- XGBoost: A Scalable Tree Boosting System by Tianqi Chen
- CatBoost vs. LightGBM vs. XGBoost
- XGBoost, LightGBM or CatBoost - which boosting algorithm should I use?
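Here is a minimal sketch of gradient boosting with scikit-learn's implementation and with XGBoost, side by side. It assumes the third-party `xgboost` package is installed, and the hyperparameters are illustrative defaults rather than tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3).fit(X_tr, y_tr)

print("sklearn GBM:", round(gbm.score(X_te, y_te), 3))
print("XGBoost:    ", round(xgb.score(X_te, y_te), 3))
```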
- KNN algorithm: Introduction to K-Nearest Neighbors Algorithm for Regression
- K-Nearest Neighbors
- Pros And Cons Of The K-Nearest Neighbors (KNN) Algorithm
- StatQuest: K-nearest neighbors, Clearly Explained
- The KNN Algorithm - Explanation, Opportunities, Limitations
- K-Nearest Neighbors (KNN) Classification with scikit-learn
- Develop k-Nearest Neighbors in Python From Scratch
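Here is a minimal sketch of k-nearest neighbors classification to go with the resources above. Because KNN is distance based, the features are standardized first; the dataset and k=5 are illustrative choices.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print("cv accuracy:", round(cross_val_score(knn, X, y, cv=5).mean(), 3))
```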
- Elbow Method for Finding the Optimal Number of Clusters in K-Means
- Intuition Behind K-Means
- k-Means Advantages and Disadvantages
- Difference between K means and Hierarchical Clustering
- Learn K-Means and Hierarchical Clustering Algorithms in 15 minutes
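To make the elbow method concrete, here is a minimal sketch that fits K-means for several values of k and prints the inertia; the "elbow" is where inertia stops dropping sharply. The blob data is synthetic and the range of k is arbitrary.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))  # inertia = within-cluster sum of squares
```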
- Hierarchical clustering explained
- How the Hierarchical Clustering Algorithm Works
- How to understand the drawbacks of Hierarchical Clustering?
- Choosing the right linkage method for hierarchical clustering
- Agglomerative Hierarchical Clustering
- Lecture 3: Hierarchical Methods
- Hierarchical Clustering in Python
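Here is a minimal sketch of agglomerative (hierarchical) clustering using SciPy's linkage matrix and scikit-learn's estimator. Ward linkage and three clusters are illustrative choices; the blob data is synthetic.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

Z = linkage(X, method="ward")                      # full merge history (can feed a dendrogram)
labels_scipy = fcluster(Z, t=3, criterion="maxclust")

labels_sklearn = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X)
print(labels_scipy[:10], labels_sklearn[:10])
```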
- Support Vector Machines: An Intuitive Approach
- Support Vector Machine(SVM): A Complete guide for beginners
- Deep Dive into Support Vector Machine
- Support Vector Machines Part 1 (of 3): Main Ideas!!! by Josh Starmer
- SVM and Kernel SVM
- Kernel Functions-Introduction to SVM Kernel & Examples
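As a small companion to the SVM and kernel resources above, here is a minimal sketch comparing a linear and an RBF kernel. SVMs are also distance based, so inputs are standardized; C and gamma are left at scikit-learn defaults rather than tuned.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
for kernel in ("linear", "rbf"):
    svm = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    print(kernel, round(cross_val_score(svm, X, y, cv=5).mean(), 3))
```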
- Deep Learning vs. Classical ML
- Backpropagation
- Neural Networks by Analogy with Linear Regression
- Neural Networks and Deep Learning
- Colah's Blog
- CNNs for Deep Learning
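To make the backpropagation resources above concrete, here is a minimal from-scratch sketch of a one-hidden-layer network trained on the XOR problem. The layer size, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of mean squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```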
- Non-negative matrix factorization for recommendation systems
- Collaborative Filtering Example - Google
- Scikit Learn Decomposition
- Quick Intro Nonnegative Matrix Factorization
- Algorithms for Non-Negative Matrix Factorization
- Optimal number of latent factors in non-negative matrix factorization?
- How to Use Cross-Validation for Matrix Completion
- Matrix Factorization for Movie Recommendations in Python
- NMF - A Visual Explainer and Python Implementation
- Recommendation System Series Part 4: The 7 Variants of Matrix Factorization For Collaborative Filtering
- Collaborative Filtering: Matrix Factorization Recommender System
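Finally, here is a minimal sketch of non-negative matrix factorization applied to a tiny user-item ratings matrix with scikit-learn's NMF. The ratings matrix and the rank of two latent factors are made up for illustration, and real recommenders would handle missing ratings more carefully than this.

```python
import numpy as np
from sklearn.decomposition import NMF

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(ratings)   # user factors
H = model.components_              # item factors
print(np.round(W @ H, 1))          # low-rank reconstruction of the ratings
```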