This project contains implementations and practice code for various Machine Learning algorithms. Each algorithm is explained and demonstrated through code examples to help you understand its functionality and use cases.
A decision tree is a non-parametric supervised learning method used for classification and regression. It recursively splits the data into subsets based on the values of the input features.
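A minimal sketch of a decision tree classifier, assuming scikit-learn and its bundled iris dataset (the repo description does not specify which library the notebooks use):

```python
# Decision tree sketch: fit a shallow tree and check test accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Limiting the depth keeps the tree small and reduces overfitting.
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)
print("Test accuracy:", tree.score(X_test, y_test))
```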
KNN is a simple, instance-based learning algorithm that assigns the class of a data point based on the majority class among its k-nearest neighbors.
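A minimal KNN sketch along the same lines (scikit-learn and the iris dataset assumed), where k = 5 neighbors vote on the class:

```python
# k-nearest neighbors sketch: the model stores the training data and
# classifies new points by majority vote among the 5 closest samples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("Test accuracy:", knn.score(X_test, y_test))
```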
Lasso (L1) and Ridge (L2) are regularization techniques used in linear regression to prevent overfitting by adding a penalty term to the loss function.
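A short sketch comparing the two penalties on synthetic data (scikit-learn assumed; the dataset and alpha value are illustrative choices):

```python
# Lasso vs. Ridge sketch: the L1 penalty can zero out uninformative
# coefficients, while the L2 penalty only shrinks them.
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                       noise=10.0, random_state=42)

lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("Lasso non-zero coefficients:", (lasso.coef_ != 0).sum())
print("Ridge non-zero coefficients:", (ridge.coef_ != 0).sum())
```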
Logistic regression is a classification algorithm used to predict binary outcomes based on a linear combination of input features.
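A minimal logistic regression sketch for a binary problem, assuming scikit-learn and its bundled breast cancer dataset:

```python
# Logistic regression sketch: fit a binary classifier and inspect
# the predicted class probabilities for one test sample.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=5000)  # larger max_iter helps convergence on unscaled features
clf.fit(X_train, y_train)
print("Class probabilities (first test row):", clf.predict_proba(X_test[:1]))
```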
Multi-output models are capable of predicting multiple target variables simultaneously, often used in multi-task learning scenarios.
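One way to sketch this, assuming scikit-learn: wrap a single-output regressor (Ridge here, an arbitrary choice) in `MultiOutputRegressor` so it predicts three targets at once.

```python
# Multi-output sketch: three target variables predicted simultaneously.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

# n_targets=3 creates three response variables for each sample.
X, y = make_regression(n_samples=200, n_features=5, n_targets=3,
                       noise=5.0, random_state=42)

model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, y)
print("Prediction shape:", model.predict(X[:4]).shape)  # (4, 3): one column per target
```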
Multiple linear regression models the relationship between two or more explanatory variables and a response variable by fitting a linear equation to observed data.
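A minimal multiple linear regression sketch with three explanatory variables on synthetic data (scikit-learn and NumPy assumed; the true coefficients are made up for illustration):

```python
# Multiple linear regression sketch: recover known coefficients from noisy data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))  # three explanatory variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

model = LinearRegression().fit(X, y)
print("Estimated coefficients:", model.coef_)   # should be close to [2.0, -1.5, 0.5]
print("Intercept:", model.intercept_)
```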
Neural networks are models built from layers of interconnected units, loosely inspired by the structure of the brain, that learn complex relationships in large amounts of data. They are used for a variety of tasks, including classification and regression.
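A minimal sketch using scikit-learn's `MLPClassifier` on the bundled digits dataset; this is only an assumption for illustration, as the notebooks may use a different library:

```python
# Neural network sketch: one hidden layer of 64 units trained by backpropagation.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=42)
mlp.fit(X_train, y_train)
print("Test accuracy:", mlp.score(X_test, y_test))
```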
Random Forest is an ensemble learning method that constructs many decision trees during training and outputs the mode of their class predictions (classification) or the mean of their predictions (regression).
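A minimal random forest sketch, again assuming scikit-learn and the iris dataset; 100 trees vote on each class:

```python
# Random forest sketch: an ensemble of 100 decision trees with majority voting.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
print("Test accuracy:", forest.score(X_test, y_test))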
SVM is a supervised learning model used for classification and regression analysis; for classification, it finds the hyperplane that separates the classes with the largest margin.
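A minimal SVM sketch with an RBF kernel (scikit-learn assumed); features are standardized first, since SVMs are sensitive to feature scale:

```python
# SVM sketch: scale the features, then fit an RBF-kernel classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_train, y_train)
print("Test accuracy:", svm.score(X_test, y_test))
```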
Simple linear regression is a linear approach to modeling the relationship between a dependent variable and one independent variable.
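A minimal simple linear regression sketch with a single independent variable (scikit-learn and NumPy assumed; the slope and intercept are made up for illustration):

```python
# Simple linear regression sketch: one feature, one target.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=(100, 1))  # single feature, shape (n_samples, 1)
y = 3.0 * x.ravel() + 7.0 + rng.normal(scale=1.0, size=100)

model = LinearRegression().fit(x, y)
print("Slope:", model.coef_[0])        # expected to be close to 3.0
print("Intercept:", model.intercept_)  # expected to be close to 7.0
```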
Cross Validation is a technique for assessing how a predictive model will generalize to an independent dataset. It is primarily used to estimate the skill of a model on unseen data.
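A minimal k-fold cross-validation sketch (scikit-learn assumed): the data is split into 5 folds, and the model is trained and scored 5 times, each time holding out a different fold.

```python
# Cross-validation sketch: 5-fold accuracy estimate for a classifier.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())
```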
Each algorithm is demonstrated in separate Jupyter notebooks. You can run these notebooks to see the implementations and understand the workings of each algorithm.
```bash
jupyter notebook
```