Supervised Machine Learning
It is known as supervised learning when we train the algorithm by explicitly supplying the labels. The supplied dataset is used to train the model in this sort of method. The model takes the form y = f(x), where x is the input variable, y is the output variable, and f(x) is the hypothesis.
Tutorials Point
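To make the y = f(x) formulation concrete, here is a minimal toy sketch in Python; the data and the candidate hypothesis are invented purely for illustration.

```python
# Toy illustration of supervised learning: we are given inputs X with their
# labels y, and we look for a hypothesis f whose predictions match the labels.

X = [1, 2, 3, 4, 5]        # input variable (features)
y = [2, 4, 6, 8, 10]       # output variable (supplied labels)

def f(x):
    """Candidate hypothesis: guess that the label is twice the input."""
    return 2 * x

# Measure how well the hypothesis reproduces the supplied labels.
total_error = sum(abs(f(xi) - yi) for xi, yi in zip(X, y))
print("total error:", total_error)   # 0 here, so f fits this training set
```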
Regression
The output variable in regression is numerical (continuous), which means we train the hypothesis f(x) to produce continuous output y given the input data x. Since the output is a real number, the regression approach is employed in the prediction of quantities, sizes, values, and other such things.
ProjectPro
Linear Regression
The most basic type of regression method. In this case, we have two variables: one independent variable, which is the feature, and one dependent variable, which is the predicted output. The relationship between these two variables is assumed to be linear, which means that a straight line can model it. The goal is to find the line that fits the data with as little error as possible, where the error is commonly measured as the sum of the squared differences between the observed points and the line. When there is just one independent variable, it is referred to as simple linear regression and is given by: y = b0 + b1x1 + ε, where ε is the error term. Multiple linear regression is used when there is more than one independent variable and is given by: y = b0 + b1x1 + b2x2 + b3x3 + ... In the equations above, y is the dependent variable, the x terms are the independent variables, and the b terms are the coefficients.
Try out this tutorial
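As a small illustration of fitting y = b0 + b1x1 by least squares, here is a NumPy sketch; the data points are invented for the example.

```python
import numpy as np

# Invented training data: x is the independent variable (feature),
# y is the dependent variable (predicted output).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.0, 9.9])

# Closed-form least-squares estimates for y = b0 + b1 * x.
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

predictions = b0 + b1 * x
squared_error = np.sum((y - predictions) ** 2)   # sum of squared residuals
print(f"y = {b0:.2f} + {b1:.2f} * x, squared error = {squared_error:.3f}")
```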
Poisson Regression
It is based on the Poisson distribution, in which the dependent variable y takes small, non-negative integer values such as 0, 1, 2, 3, 4, and so on, under the assumption that large counts do not occur regularly. Poisson regression is similar to logistic regression in this respect, except that the dependent variable is not restricted to a specific range of values.
Try out this tutorial
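A minimal sketch of fitting a Poisson regression, assuming scikit-learn (which provides PoissonRegressor) is installed; the count data below is fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Fabricated example: predict a small non-negative count (e.g. events per day)
# from a single feature.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 1, 1, 2, 4, 5])   # dependent variable: small counts

model = PoissonRegressor()   # generalized linear model with Poisson loss
model.fit(X, y)

print(model.predict(np.array([[7.0]])))   # expected count for a new input
```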
Support Vector Regression
As the name implies, Support Vector Regression is a regression algorithm that supports both linear and non-linear regression. This approach is based on the Support Vector Machine idea. SVR differs from SVM in that SVM is a classifier that predicts discrete categorical labels, whereas SVR is a regressor that predicts continuous ordered variables. The goal of basic regression is to minimize the error, whereas the goal of SVR is to fit the error within a particular threshold, which means that the task of SVR is to approximate the best value within a given margin termed the ε-tube.
YouTube tutorial for Support Vector Machine
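A short sketch of the ε-tube idea using scikit-learn's SVR, assuming scikit-learn is installed; the data and the epsilon value are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.svm import SVR

# Arbitrary one-dimensional regression data.
X = np.linspace(0, 5, 30).reshape(-1, 1)
y = np.sin(X).ravel()

# epsilon sets the width of the tube: errors inside the tube are not penalized.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)

print(model.predict([[2.5]]))   # continuous prediction for a new input
```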
Classification
The output variable in classification is discrete. To put it another way, we train the hypothesis f(x) to produce discrete output y for the input data x. The output can also be described as a class. Using the earlier example of house pricing, instead of predicting the exact amount, we can use classification to forecast whether the house price will be above or below a given threshold. As a result, we have two classes: one for prices above the threshold and one for prices below it. Classification is used in speech recognition, image classification, NLP, etc.
Classification tutorial on the IBM Developer site
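To illustrate the house-price example, here is a toy sketch that turns continuous prices into two discrete classes around an assumed threshold; all numbers are made up.

```python
import numpy as np

prices = np.array([150_000, 320_000, 275_000, 90_000, 410_000])
threshold = 250_000   # assumed cut-off separating the two classes

# Discrete output: class 1 if the price is above the threshold, else class 0.
labels = (prices > threshold).astype(int)
print(labels)   # [0 1 1 0 1]
```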
Logistic Regression
It is a classification algorithm. It is used to estimate discrete values given the independent variables. It helps determine the probability of an event occurring by employing a logit function. The output y of the hypothesis ranges from 0 to 1. The logistic regression function is given by: p = 1/(1 + e^(-y)), where y is the underlying linear equation (e.g. y = b0 + b1x1). This function scales the value between 0 and 1; it is also known as the sigmoid function.
YouTube tutorial for Logistic Regression
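A small sketch of the sigmoid mapping p = 1/(1 + e^(-y)) described above, applied to an invented linear score.

```python
import numpy as np

def sigmoid(y):
    """Squash a linear score y into a probability between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-y))

# Invented linear model y = b0 + b1 * x applied to a few inputs.
b0, b1 = -3.0, 1.5
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
scores = b0 + b1 * x

probabilities = sigmoid(scores)
print(probabilities)                         # values strictly between 0 and 1
print((probabilities >= 0.5).astype(int))    # discrete class predictions
```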
Decision Tree
The decision tree builds classification or regression models in the form of a tree structure. It repeatedly subdivides the dataset and assigns a decision to each subdivision, yielding a tree with decision nodes and leaf nodes. One or more decision nodes lead to leaf nodes, and a leaf node represents a classification or decision. The tree approximates the outcome using if-then-else rules; the deeper the tree, the more complex the rules and the more closely the model fits the data.
Documentation tutorial on hackerearth.com
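A brief sketch of a decision tree classifier, assuming scikit-learn is available; export_text prints the learned if-then-else rules, and the toy data is invented.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: one feature, two classes.
X = [[1], [2], [3], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier(max_depth=2)
tree.fit(X, y)

# The fitted tree is a set of if-then-else rules over the feature values.
print(export_text(tree, feature_names=["feature_0"]))
print(tree.predict([[5]]))   # class prediction for a new input
```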
Naïve Bayes Classifier
When it comes to classification tasks, a Naïve Bayes classifier is a probabilistic machine learning model that is used to make predictions. The Bayes theorem is at the heart of the classifier. The mathematical function is `P(A|B) = P(B|A) P(A) / P(B)`, where A is the class and B is the observed features.
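A tiny worked example of the Bayes theorem computation P(A|B) = P(B|A) P(A) / P(B), using made-up probabilities.

```python
# Made-up probabilities for a single class A and observed evidence B.
p_a = 0.3            # prior probability of class A
p_b_given_a = 0.8    # likelihood of seeing evidence B if the class is A
p_b = 0.5            # overall probability of evidence B

# Bayes theorem: posterior probability of class A given evidence B.
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)   # 0.48
```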