This is a machine learning and deep learning repository

machine-learning

Machine learning uses mathematical optimization to train algorithms on data so that they can make predictions on new, unseen inputs.

Linear Regression

Linear regression is a machine learning algorithm that takes data with a roughly linear trend as input and generates a line of best fit. This algorithm is useful for business analytics tasks such as forecasting, projections, and simple data analysis.

Honey Production takes current production trends in beekeeping, and predicts future honey output.
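The idea can be sketched with scikit-learn's LinearRegression. The production numbers below are made up for illustration and are not the project's actual honey dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical yearly production figures: output declines
# roughly linearly over time, plus a little noise.
years = np.arange(2000, 2013).reshape(-1, 1)
production = 5.0 - 0.1 * (years.ravel() - 2000) \
    + np.random.default_rng(0).normal(0, 0.05, 13)

# Fit the line of best fit to the historical data.
model = LinearRegression()
model.fit(years, production)

# Extrapolate the fitted line to forecast a future year.
prediction = model.predict([[2050]])
print(model.coef_[0], prediction)
```

The fitted slope recovers the downward trend, and `predict` extends that line forward, which is all a forecast of this kind amounts to.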

Multiple Linear Regression

Multiple linear regression computes a best-fit relationship by regressing the output on several input variables at once. Example use cases of this algorithm include maximizing real estate returns given a rental property's many features, or predicting other multivariable outcomes, such as manufacturing output.

Rental Prices uses multiple linear regression to predict rental prices for apartments in New York City with 82% accuracy.
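A minimal sketch of the multivariable fit with scikit-learn. The apartment features and price formula below are invented for illustration and differ from the project's real dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Hypothetical apartment features: [square footage, bedrooms, floor]
X = np.column_stack([
    rng.uniform(400, 2000, 200),   # size in square feet
    rng.integers(0, 4, 200),       # number of bedrooms
    rng.integers(1, 30, 200),      # building floor
])
# Rent simulated as a linear combination of the features plus noise.
rent = 500 + 2.5 * X[:, 0] + 300 * X[:, 1] + 10 * X[:, 2] \
    + rng.normal(0, 100, 200)

# One regression over all three variables at once.
model = LinearRegression().fit(X, rent)
print(model.score(X, rent))  # R^2 of the fit
```

Each coefficient in `model.coef_` estimates how much rent changes per unit of that feature, holding the others fixed.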

Logistic Regression

Logistic regression is a machine learning algorithm that estimates the probability of an outcome from a datapoint's features. For example, this algorithm could predict which students will pass an exam, or a person's odds of surviving a natural disaster.

Natural Disaster isolates the features most relevant to survival to predict that a middle-class man in his 20s had only a 10% chance of surviving the Titanic.
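A sketch of the approach with scikit-learn's LogisticRegression. The passenger data is simulated here; the feature encoding and coefficients are assumptions for illustration, not the project's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Hypothetical passenger features: [class (1-3), age, is_male]
X = np.column_stack([
    rng.integers(1, 4, 500),
    rng.uniform(1, 70, 500),
    rng.integers(0, 2, 500),
])
# Simulated labels: higher class, younger, female -> likelier to survive.
logits = 3.0 - 1.0 * X[:, 0] - 0.02 * X[:, 1] - 1.5 * X[:, 2]
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
# Estimated survival probability for a third-class 25-year-old man.
proba = model.predict_proba([[3, 25, 1]])[0, 1]
print(round(proba, 2))
```

Unlike linear regression, the output is squashed through a sigmoid, so `predict_proba` returns a probability between 0 and 1 rather than an unbounded score.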

Naive Bayes

Naive Bayes is a machine learning and natural language processing algorithm that classifies data, frequently in the form of text. This algorithm uses word probabilities to analyze customer sentiment, or the expectations of a particular user base. Also, because language is probabilistic, it powers many "AI" applications, such as assistants and chatbots.

Text Classifier is a naive bayes classifier that determines the subject of an email using its word content with 99% accuracy.
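The classifier can be sketched with scikit-learn's CountVectorizer and MultinomialNB. The toy emails below are made up and stand in for the project's real corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A tiny made-up corpus standing in for the real email dataset.
emails = [
    "the rocket launch was delayed by weather",
    "nasa confirmed the orbit of the satellite",
    "the pitcher threw a perfect game last night",
    "our team won the baseball championship",
]
labels = ["space", "space", "baseball", "baseball"]

# Convert each email into word counts.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(emails)

# Naive Bayes scores each class by the probability of its words.
classifier = MultinomialNB().fit(counts, labels)
new_email = vectorizer.transform(["the satellite reached orbit"])
prediction = classifier.predict(new_email)
print(prediction)  # -> ['space']
```

Words like "satellite" and "orbit" only appear in the space emails, so their class probabilities dominate and the new message is labeled accordingly.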

K-Nearest Neighbors

K-nearest neighbors is a machine learning algorithm that classifies an unknown datapoint by polling the labels of its nearest neighbors in the data, and rendering a verdict by majority vote. It is deployed in recommendation engines and genetic sequencing technologies.

MRI Scans is a project I coded using a k-nearest neighbors algorithm, which interpreted MRI scans to isolate and classify cancerous tumours with 96% accuracy.
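A sketch of the classifier with scikit-learn. The bundled breast cancer dataset stands in here for the project's MRI scan features:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Tumor measurements with benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# k=5: each unknown point polls its five nearest labeled neighbors
# and takes the majority vote as its class.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
score = knn.score(X_test, y_test)
print(score)
```

The choice of k trades off noise sensitivity (small k) against blurring the decision boundary (large k); odd values avoid tied votes in binary problems.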

Decision Tree

The decision tree is a machine learning algorithm that classifies a new datapoint by asking a sequence of questions about its features, one branch at a time. An example use case of this algorithm is classifying complex objects, such as in a sorting or quality assurance application.

National Flag interprets feature data from a nation's flag, and with 55% accuracy predicts the continent on which that nation is located.
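A sketch with scikit-learn's DecisionTreeClassifier. The flag features and continent labels below are invented for illustration; the real flag dataset has many more attributes:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical flag features: [has_red, has_stripes, num_colors]
X = [
    [1, 1, 3], [1, 0, 2], [0, 1, 3], [1, 1, 2],
    [0, 0, 2], [1, 0, 3], [0, 1, 2], [1, 1, 3],
]
continents = [
    "Europe", "Asia", "Europe", "Africa",
    "Asia", "Africa", "Europe", "Africa",
]

# max_depth caps how many feature questions the tree may ask
# before it must commit to a classification.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, continents)
guess = tree.predict([[0, 1, 3]])
print(guess)
```

The fitted tree can be inspected with `sklearn.tree.export_text(tree)`, which prints the exact sequence of feature questions each branch asks.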

Random Forest

The random forest is an ensemble learning algorithm that generates multiple decision trees and polls their classifications to render a verdict by majority. It is considered a more robust approach to classification because each tree is trained on a random sample of the data and features, and the forest also reports feature importances.

Income Predictor is a random forest algorithm that predicts which Americans make over $50,000 a year with 82% accuracy.
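A sketch with scikit-learn's RandomForestClassifier. The census-style features and the rule generating the labels are simulated for illustration, not the project's actual data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Hypothetical census-style features: [age, years of education, hours/week]
X = np.column_stack([
    rng.integers(18, 70, 1000),
    rng.integers(8, 21, 1000),
    rng.integers(10, 80, 1000),
])
# Simulated label: education and hours worked drive income over $50,000;
# age is deliberately irrelevant here.
y = (0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, 1000) > 6).astype(int)

# 100 trees, each fit on a random bootstrap sample with random feature
# subsets at each split; the forest predicts by majority vote.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.feature_importances_)
```

Because age never influenced the simulated labels, its importance score comes out lowest, which is the kind of signal feature importances are meant to surface.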

Perceptron

The perceptron is a machine learning algorithm that classifies linearly separable data. An ensemble of these algorithms constitutes the building blocks of neural networks.

AND Gate is a perceptron I developed which isolates the decision boundary inside a truth table to function as an AND logic gate.
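The same idea can be sketched in plain NumPy: the perceptron learning rule nudges the weights toward each misclassified point until the truth table's decision boundary is found. This is a generic reconstruction, not the project's exact code:

```python
import numpy as np

# Truth table for AND: inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Perceptron learning rule: adjust weights and bias by the error.
weights = np.zeros(2)
bias = 0.0
for _ in range(10):  # AND is linearly separable, so this converges
    for inputs, target in zip(X, y):
        prediction = int(weights @ inputs + bias > 0)
        weights += (target - prediction) * inputs
        bias += target - prediction

predictions = [int(weights @ inputs + bias > 0) for inputs in X]
print(predictions)  # -> [0, 0, 0, 1]
```

The learned weights define a single line through the truth table's input space, with (1, 1) alone on the positive side, which is exactly what an AND gate requires.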

K-Means Clustering

K-means clustering is an unsupervised machine learning algorithm that partitions unlabeled data into k clusters by grouping datapoints around iteratively updated cluster centers, with no human-in-the-loop supervision. If a desired outcome calls for them, labels can then be assigned to the discovered clusters. This algorithm is capable of use cases including computer vision, or other applications where the data arrives without labels.

Digits interprets my handwritten numerical input and names the digits with varying accuracy.

Iris classifies subspecies of flora using only their dimensions with 92% accuracy.
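A sketch of the unsupervised step with scikit-learn's KMeans on the bundled iris measurements. Note the labels are never shown to the algorithm; they would only be consulted afterwards to name the clusters:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Four measurements per flower; the species labels are ignored.
X, _ = load_iris(return_X_y=True)

# Partition the measurements into three clusters by iteratively
# moving each cluster center to the mean of its assigned points.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])
```

Choosing k itself is part of the problem; since iris has three subspecies, k=3 is the natural setting here, but in general an elbow plot of `kmeans.inertia_` over candidate k values guides the choice.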

deep-learning

A neural network is a nested vector function, and when such a network consists of more than two layers, it constitutes a deep learning model.

Multilayer Perceptron

The multilayer perceptron approximates relationships between input and output pairs to acquire predictive capabilities. Due to its multiple layers and non-linear activations, the network can form complex decision boundaries and perform a variety of classification and regression tasks.

Digits is a deep learning model I developed with Keras and TensorFlow, which classifies handwritten digits 0 - 9 with 97% accuracy.
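The project itself uses Keras and TensorFlow; as a self-contained stand-in, the same task can be sketched with scikit-learn's MLPClassifier on its bundled 8x8 digit images:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images, flattened to 64 features each.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units; the ReLU activations between layers
# give the network its non-linear decision boundaries.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train / 16.0, y_train)  # pixel values scaled to [0, 1]
score = mlp.score(X_test / 16.0, y_test)
print(score)
```

Scaling the pixels to [0, 1] before training is the usual step; gradient-based optimizers converge much faster on normalized inputs.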

Convolutional Neural Network

The convolution is a matrix operation that slides a small filter (kernel) across the input and measures, at each position, how strongly that region matches the filter's pattern. The technique was invented to work with image data, and its most common use cases involve computer vision and image labeling.

Cats and Dogs is a computer vision model built with OpenCV, Keras, and TensorFlow, which classifies cats and dogs with 77% accuracy.
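The project is built with OpenCV, Keras, and TensorFlow; the core convolution operation those libraries accelerate can be sketched in plain NumPy, here with a hand-made vertical-edge filter:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and sum the elementwise
    products at each position (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a sharp vertical edge down the middle.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A vertical-edge detector: responds where intensity rises left to right.
kernel = np.array([
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
], dtype=float)

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong responses everywhere the edge sits
```

In a convolutional network the kernels are not hand-made like this one; they are learned by gradient descent, and stacks of such feature maps feed the classification layers that ultimately separate cats from dogs.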