🗣 Hacktoberfest encourages participation in the open source community, which grows bigger every year. Complete the 2021 challenge and earn a limited edition T-shirt.
📢 Register here for Hacktoberfest and make four pull requests (PRs) between October 1st–31st to grab free swag 🔥.
- Fork this repository. If you have already forked it, please fetch upstream before making a PR.
- Select one topic listed in the README's table of contents, and make sure no one has already opened an issue for it or contributed to it.
- Create a new issue for the topic you chose. If an issue already exists, comment on it to say you are taking that topic.
- Go to the forked repository in your account, edit the README.md file, and add verified details for your topic. Please follow the table format already in place.
- Commit your changes with the topic name and a description of what you have done.
- Open a Pull Request (PR) to the original repository against the Dev branch.
- That's all! Once your PR is accepted, it counts as one PR toward Hacktoberfest 2021.
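The steps above map roughly onto the following git workflow. This is a sketch with placeholder URLs (`<your-username>`, `<original-owner>`, `<topic>` are stand-ins you must replace), not exact commands for this repository:

```shell
# 1. Fork on GitHub, then clone YOUR fork (placeholder URL)
git clone https://github.com/<your-username>/Machine-Learning-Topics.git
cd Machine-Learning-Topics

# 2. Keep the fork in sync with the original repository before starting
git remote add upstream https://github.com/<original-owner>/Machine-Learning-Topics.git
git fetch upstream
git merge upstream/Dev          # PRs here target the Dev branch

# 3. Edit README.md, then commit with the topic and a description
git add README.md
git commit -m "Add description and tutorial link for <topic>"

# 4. Push to your fork and open a Pull Request against Dev
git push origin Dev
```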
You can find more information at this link or by searching on your own.
Keep in touch with me: Deshitha Hansajith (LinkedIn)
Please read the Guidelines for contributors
- Machine Learning
- Supervised Machine Learning
- Regression
- Linear Regression
- Poisson Regression
- Support Vector Regression
- Classification
- Logistic Regression
- Neural Network
- Decision Tree
- Naive Bayes Classifier
- Regression
- Unsupervised Machine Learning
- Clustering
- High Dimension Data
- Generative Models
- Reinforcement Machine Learning
- Semi-supervised Machine Learning
- Supervised Machine Learning
- Deep Learning
- Feed Forward Neural Network
- Radial Basis Function Neural Network
- Multi-layer Perceptron
- Convolutional Neural Network
- Recurrent Neural Network
- Modular Neural Network
- Sequence to Sequence Model
- Neural Networks
- Feed Forward Neural Network
- Radial Basis Function Neural Network
- Multi-layer Perceptron
- Convolutional Neural Network
- Recurrent Neural Network
- Modular Neural Network
Topic | Description | Example / Tutorial (link) |
---|---|---|
Supervised Machine Learning | In supervised learning, we train the algorithm on a dataset whose labels are supplied explicitly. The model takes the form Y = f(X), where X is the input variable, Y is the output variable, and f(X) is the hypothesis. | Tutorials Point |
Regression | The output variable in regression is numerical (continuous), which means we train the hypothesis f(x) to produce a continuous output y given the input data x. Since the output is a real number, the regression approach is employed to predict quantities, sizes, values, and other continuous measures. | ProjectPro |
Linear Regression | The most basic regression method. We have two variables: a dependent variable, which is the predicted output, and an independent variable, which is the feature. The relationship between them is assumed to be linear, meaning a straight line can fit them. The goal is to find the line that fits these points with as little error as feasible, where the error is the sum of the distances between the points and the line. When there is just one independent variable, it is called simple linear regression: Y = b0 + b1x1. When there is more than one independent variable, it is called multiple linear regression: Y = b0 + b1x1 + b2x2 + b3x3 + ... In both equations, Y is the dependent variable and the x's are the independent variables. | Try out this tutorial |
Poisson Regression | It is based on the Poisson distribution, in which the dependent variable (Y) takes small, non-negative integer values such as 0, 1, 2, 3, 4, and so on, under the assumption that large counts occur rarely. Poisson regression is similar to logistic regression in this respect, except that the dependent variable is not restricted to a single pair of values. | Try out this tutorial |
Support Vector Regression | As the name implies, Support Vector Regression is a regression algorithm that supports both linear and non-linear regression. The approach is based on the Support Vector Machine idea. SVR differs from SVM in that SVM is a classifier that predicts discrete categorical labels, whereas SVR is a regressor that predicts continuous ordered variables. The goal of basic regression is to minimize the error, while the goal of SVR is to fit the error within a particular threshold: SVR approximates the best value within a given margin, termed the ε-tube. | YouTube tutorial for Support Vector Machine |
Classification | The output variable in classification is discrete. To put it another way, we train the hypothesis f(x) to produce a discrete output y for the input data x. The output can also be described as a class. Using the earlier example of house pricing: instead of predicting the precise amount, we can use classification to forecast whether the house price will be above or below a given value. As a result, we have two classes: one for when the price is above and one for when it is below. Classification is used in speech recognition, image classification, NLP, etc. | Classification tutorial on the IBM Developer site |
Logistic Regression | A type of classification algorithm. It is used to estimate a discrete value given the independent variables, and it determines the probability that an event occurs by employing a logit function. The output y of the hypothesis ranges from 0 to 1. The logistic regression function is p = 1 / (1 + e^-y), where y is the equation of the line. This function scales the value between 0 and 1 and is also known as the sigmoid function. | YouTube tutorial for Logistic Regression |
Decision Tree | The decision tree builds classification or regression models as tree structures. It subdivides the dataset and assigns a decision to each subdivision, producing a tree with decision nodes and leaf nodes. One or more decision nodes lead to each leaf node, and a leaf node represents a classification or choice. The tree approximates the outcome using if-then-else rules: the deeper the tree, the more complex its rules and the finer-grained the model. | Documentation tutorial – hackerearth.com |
Naïve Bayes Classifier | A Naïve Bayes classifier is a probabilistic machine learning model used for classification tasks. The classifier is built on Bayes' theorem: P(A\|B) = P(B\|A)P(A) / P(B). | Documentation tutorial – towardsdatascience.com |
Semi-supervised Machine Learning | Semi-supervised learning is a class of supervised learning tasks and approaches that make use of unlabeled data in addition to labeled data for training, often a small quantity of labeled data combined with a large amount of unlabeled data. Semi-supervised learning is a middle ground between unsupervised and supervised learning. | [Tutorial on Semi-Supervised Learning (PDF)](http://pages.cs.wisc.edu/~jerryzhu/pub/sslchicago09.pdf) |
Unsupervised Machine Learning | Unsupervised machine learning uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. Its ability to discover similarities and differences in information makes it the ideal solution for exploratory data analysis, cross-selling strategies, customer segmentation, and image recognition. | Learn further on Unsupervised Machine Learning |
Clustering | Clustering is an unsupervised machine learning task. It involves automatically discovering natural groupings in data, and it is often used as a data analysis technique for finding interesting patterns, such as groups of customers based on their behavior. | Tutorial and example for clustering |
High Dimension Data | High Dimensional means that the number of dimensions are staggeringly high — so high that calculations become extremely difficult. With high dimensional data, the number of features can exceed the number of observations. | Dimensionality & High Dimensional Data |
Generative Models | Generative modeling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data, in such a way that the model can be used to generate new examples that plausibly could have been drawn from the original dataset. | A Gentle Introduction to Generative Adversarial Networks (GANs) |
Reinforcement Machine Learning | Reinforcement Learning is a feedback-based machine learning technique in which an agent learns to behave in an environment by performing actions and seeing their results. | [Reinforcement Learning Tutorial](https://www.javatpoint.com/reinforcement-learning) |
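As a concrete companion to the Linear Regression row above, here is a minimal sketch of simple linear regression in plain Python, fitting Y = b0 + b1x1 by ordinary least squares. The function name and the toy data are my own, not from any particular library:

```python
def fit_simple_linear_regression(xs, ys):
    """Return (b0, b1) minimizing the sum of squared errors for y = b0 + b1*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope b1 = covariance(x, y) / variance(x)
    b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
         sum((x - mean_x) ** 2 for x in xs)
    b0 = mean_y - b1 * mean_x  # intercept passes through the means
    return b0, b1

# Points lying exactly on the line y = 2 + 3x, so the fit recovers it
b0, b1 = fit_simple_linear_regression([0, 1, 2, 3], [2, 5, 8, 11])
print(b0, b1)  # → 2.0 3.0
```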
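The Logistic Regression row's formula p = 1 / (1 + e^-y) can be sketched directly. The helper names and the example coefficients here are illustrative, not a library API:

```python
import math

def sigmoid(y):
    """Logistic (sigmoid) function: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

def predict_proba(x, b0, b1):
    """Probability of the positive class for input x under the line y = b0 + b1*x."""
    return sigmoid(b0 + b1 * x)

print(sigmoid(0))                      # → 0.5, the decision boundary
print(predict_proba(4, -2, 1) > 0.5)   # → True: x = 4 falls on the positive side
```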
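The Bayes' theorem formula behind the Naïve Bayes classifier, P(A|B) = P(B|A)P(A) / P(B), evaluated on a toy spam-filter example (the probabilities below are made up purely for illustration):

```python
def bayes(p_b_given_a, p_a, p_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: P(word|spam) = 0.8, P(spam) = 0.4, P(word) = 0.5
print(bayes(0.8, 0.4, 0.5))  # → P(spam|word) = 0.64
```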
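To make the Clustering row concrete, here is a bare-bones 1-D k-means (Lloyd's algorithm) in plain Python. It is a sketch of the idea of "automatically discovering natural groupings", not any particular library's implementation:

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's algorithm on 1-D data: assign each point to its nearest
    center, then move each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Empty clusters keep their old center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious groups, around 1 and around 10; centers converge to them
print(kmeans_1d([0.9, 1.0, 1.1, 9.9, 10.0, 10.1], [0.0, 5.0]))
```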
Topic | Description | Example / Tutorial (link) |
---|---|---|
Multi layer perceptron (MLP) | The MLP is a type of feed forward neural network with three kinds of layers: input, output, and hidden. The input layer receives the signal to be processed. The output layer performs tasks like prediction and categorization. The MLP's true computational engine is an arbitrary number of hidden layers inserted between the input and output layers. An MLP is a feed forward network with data flowing from the input to the output layer, and its neurons are trained using backpropagation. MLPs can approximate any continuous function and handle nonlinear problems. MLP is used for pattern recognition, prediction, and approximation. | How MLP works |
Modular Neural Network (MNN) | A Modular Neural Network uses a number of neural networks for problem solving. The various networks behave as modules, each solving a part of the problem. An integrator handles both dividing the problem into modules and combining the modules' responses to generate the final output. | Get a better understanding of MNN |
Recurrent Neural Network | Recurrent neural networks (RNN) are a class of neural networks that are helpful in modeling sequence data. Derived from feedforward networks, RNNs exhibit similar behavior to how human brains function. Simply put: recurrent neural networks produce predictive results in sequential data that other algorithms can’t. | More about RNN |
Sequence to Sequence Model | Sequence-to-sequence learning (Seq2Seq) is the process of developing models that transform sequences from one domain into sequences in another domain. It may be used for machine translation or for free-form question answering. As a general principle, it is applicable whenever you need to generate text. A variety of approaches may be used for this job, including RNNs and 1D convnets. | Link to refer about Sequence to Sequence Model |
Feedforward Neural Networks | A Feed Forward Neural Network is an artificial neural network in which the connections between nodes does not form a cycle. The opposite of a feed forward neural network is a recurrent neural network, in which certain pathways are cycled. The feed forward model is the simplest form of neural network as information is only processed in one direction. While the data may pass through multiple hidden nodes, it always moves in one direction and never backwards. | Understanding Feedforward Neural Networks |
Sequence to Sequence Model (with Attention) | Deep Learning at scale is disrupting numerous sectors by enabling the creation of never-before-seen chatbots and bots. A person just getting started might learn the basics of neural networks and their many designs, such as CNNs and RNNs. However, there is a significant leap from these simple notions to Deep Learning applications in industry: understanding concepts like batch normalization, dropout, and attention is nearly a prerequisite. Two key ideas employed in today's state-of-the-art speech recognition and natural language processing applications are sequence-to-sequence modeling and attention models. As a taste of what these approaches can do, Baidu's AI system uses them to clone your voice: with just three seconds of training it can mimic a person's voice, and you can listen to audio samples of both original and synthetic voices released by Baidu's research team. (This material assumes familiarity with the fundamentals of deep learning and experience building RNN models.) | Essentials of Deep Learning – Sequence to Sequence modelling with Attention (using python) |
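The feedforward rows above can be illustrated with the smallest interesting forward pass: a two-layer network computing XOR, which no single-layer perceptron can represent. The weights here are set by hand to show the computation, not learned:

```python
def step(z):
    """Threshold activation: the unit fires (1) when its weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_network(x1, x2):
    """Hidden layer: one OR unit and one NAND unit; output unit ANDs them.
    Weights are hand-set for illustration, not learned via backpropagation."""
    h_or   = step(x1 + x2 - 0.5)      # fires when at least one input is 1
    h_nand = step(-x1 - x2 + 1.5)     # fires unless both inputs are 1
    return step(h_or + h_nand - 1.5)  # fires only when both hidden units fire

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_network(a, b))  # → 0, 1, 1, 0 (XOR truth table)
```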
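The RNN row's "internal memory" boils down to one recurrence: h_t = tanh(w_x·x_t + w_h·h_(t-1)). A scalar version makes this visible; the weight values below are arbitrary illustrations:

```python
import math

def rnn_scan(inputs, w_x=1.0, w_h=0.5, h0=0.0):
    """Unroll a scalar RNN: each step mixes the new input with the previous
    hidden state, so the state carries a (decaying) memory of the sequence."""
    h = h0
    states = []
    for x in inputs:
        h = math.tanh(w_x * x + w_h * h)  # h_t = tanh(w_x*x_t + w_h*h_{t-1})
        states.append(h)
    return states

# The same multiset of inputs in a different order yields a different
# final state, which is exactly what makes RNNs sequence-aware.
print(rnn_scan([1.0, 0.0, 0.0]))
print(rnn_scan([0.0, 0.0, 1.0]))
```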
Topic | Description | Example / Tutorial (link) |
---|---|---|
Radial Basis Function Neural Network | Radial basis function (RBF) networks are a commonly used type of artificial neural network for function approximation problems. Radial basis function networks are distinguished from other neural networks due to their universal approximation and faster learning speed. An RBF network is a type of feed forward neural network composed of three layers, namely the input layer, the hidden layer and the output layer. | Introduction to Radial Basis Function Neural Network |
Feed Forward Neural Network | MLPs, or multilayer perceptrons, are the quintessential deep learning models. A feedforward network's purpose is to approximate some target function f*; for a classifier, y = f*(x) maps an input x to a category y. A feedforward network learns the parameters that result in the best function approximation. | Introduction for Feed Forward Neural Network |
Convolutional Neural Network | A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data. CNNs are powerful image-processing AI systems that use deep learning to perform both generative and descriptive tasks, often using machine vision (image and video recognition) along with recommender systems and natural language processing (NLP). | Convolutional Neural Network Tutorial |
Recurrent Neural Network | Apple's Siri and Google's voice search both use recurrent neural networks (RNNs), which are the state-of-the-art algorithm for sequential data. It is the first algorithm with an internal memory that remembers its input, making it ideal for machine learning problems involving sequential data. It's one of the algorithms that's been at the heart of deep learning's incredible progress in recent years. | Recurrent Neural Networks cheatsheet |
Multi-layer Perceptron | In a word, a multilayer perceptron (MLP) is a fancy name for a feedforward neural network, the more popular term. A model is said to be multilayer when it has several hidden and output components. A perceptron is a binary classification technique; as a linear classifier, a single perceptron cannot reliably separate classes that are not linearly separable on its own (imagine trying to separate a circle from a square using nothing but a straight line). Thanks to its multiple layers (where each node is a perceptron) and its non-linear activation functions (which serve as gates between those nodes), an MLP is able to handle classification problems where the classes are not linearly separable. Node weights are adjusted using backpropagation (derivatives of the loss function trickling back through the chain rule to adjust node weights), driven by an iterative optimization process known as gradient descent (gradual adjustments leading to a local, and hopefully global, optimum). | Multi-Layer Perceptron Learning |
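The perceptron described in the Multi-layer Perceptron row learns by nudging its weights toward misclassified examples. A minimal sketch of that update rule, trained on the linearly separable AND function (the hyperparameters are illustrative):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Perceptron learning rule: on each mistake, move the weights toward
    the example. Converges only for linearly separable data (AND, not XOR)."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            err = target - pred          # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in AND])  # → [0, 0, 0, 1]
```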
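The hidden units of the RBF network described above respond according to distance from a center rather than a weighted sum. A sketch of the Gaussian radial basis function commonly used for this (the function name and width parameter are my own labels):

```python
import math

def gaussian_rbf(x, center, width=1.0):
    """Gaussian RBF: peaks at 1.0 when x sits exactly at the center and
    decays with squared distance; `width` controls how fast it falls off."""
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

print(gaussian_rbf(3.0, 3.0))                                # → 1.0 at the center
print(gaussian_rbf(5.0, 3.0) < gaussian_rbf(4.0, 3.0))       # → True: farther means weaker
```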