- Naive Bayes classifier for categorical data from scratch in Python (a minimal code sketch follows this list)
- Naive Bayes classifier for continuous data from scratch in Python (see the Gaussian sketch after this list)
- Data Visualization: Showing the Iris dataset with the Blender API
- Norms in vector space: A review of norms, including p-norms. Finally, we compare some special p-norms (see the code sketch after this list).
- Inner products in vector space: A review of the dot product and the Frobenius inner product, and of the canonical norms based on them. Examples with the NumPy module are included (see also the sketch after this list).
- Gram-Schmidt process: An algorithm that converts a linearly independent set of vectors into an orthogonal set of vectors (a code sketch follows this list).
- Boxplot: The elements of a boxplot are reviewed here, including medians, quartiles, fences, and outliers (see the sketch after this list).
- Probability, standard terms: such as sample space, trial, outcome, and event.
- Logistic function: An S-shaped curve that is widely used in machine learning and neural networks (a code sketch follows this list).
- Sigmoid functions (curves): Some examples are included. They are widely used in neural networks and deep learning.
- Conditional probability: We review conditional probability and, based on it, derive the multiplication rule.
- Inclusion-exclusion principle: We review this principle both in set theory and in probability. Python code is also provided (see the sketch after this list).
- Probability, independent events: The properties of independent events are reviewed here. The multiplication rule is also included, with some examples.
- Probability, Bayes' rule: Bayes' rule is expressed here along with the total probability theorem. Bayes' rule is defined in terms of conditional probabilities. Some Python code is included too (see the sketch after this list).
- Linear Regression with Least Squares: When we assume the data points are related through a linear function, we can predict the dependent variable from the independent variable(s). This is linear regression. One way to find the parameters of a linear regression is to use a least squares estimator. The related Python code clarifies this topic (a sketch also follows this list).
- Ridge Regression with Least Squares: Ridge regression is an extension of linear regression in which a penalty term is added to the loss function. This penalty term is called the regularization term. Ridge regression is especially useful when the data points are noisy and/or contain outliers. It is also robust against overfitting (see the sketch after this list).
- Gradient Descent for Linear and Ridge Regression: This time we use the Gradient Descent method to find the minimum of the loss functions of linear regression and ridge regression. For a deeper look at Gradient Descent (GD), see our repository for Optimization (a code sketch follows this list).
- Gradient and tangent planes: For a surface of the form f(x,y,z)=constant, the gradient vector at a point is orthogonal to the surface at that point. With this property, we can derive the equation of the tangent plane to a surface or the tangent line to a level curve. Recall that a tangent plane is a linear approximation to the given function (see the sketch after this list).
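Below are short code sketches for the items above; all function names, data, and parameters in them are illustrative assumptions, not code taken from the repository's notebooks. First, a minimal categorical Naive Bayes with Laplace smoothing:

```python
import numpy as np

def fit_categorical_nb(X, y, alpha=1.0):
    """Estimate class priors and per-feature categorical likelihoods,
    with Laplace smoothing controlled by alpha."""
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    likelihoods = {}  # maps (class, feature index, category) -> probability
    for c in classes:
        Xc = X[y == c]
        for j in range(X.shape[1]):
            cats = np.unique(X[:, j])
            for v in cats:
                count = np.sum(Xc[:, j] == v)
                likelihoods[(c, j, v)] = (count + alpha) / (len(Xc) + alpha * len(cats))
    return classes, priors, likelihoods

def predict_categorical_nb(x, classes, priors, likelihoods):
    """Return the class with the largest log-posterior (up to a constant)."""
    scores = {}
    for c in classes:
        logp = np.log(priors[c])
        for j, v in enumerate(x):
            logp += np.log(likelihoods.get((c, j, v), 1e-12))  # unseen category
        scores[c] = logp
    return max(scores, key=scores.get)

# Toy weather data: features are (outlook, windy), label is whether we play
X = np.array([["sunny", "no"], ["sunny", "yes"], ["rain", "no"],
              ["rain", "yes"], ["overcast", "no"]])
y = np.array(["yes", "no", "yes", "no", "yes"])
model = fit_categorical_nb(X, y)
print(predict_categorical_nb(["sunny", "no"], *model))  # prints: yes
```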
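A matching sketch for continuous features, assuming each feature is Gaussian within each class:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Per class: prior, feature means, and feature variances."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return params

def predict_gaussian_nb(x, params):
    """Class with the largest Gaussian log-posterior (up to a constant)."""
    def log_post(prior, mu, var):
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(params, key=lambda c: log_post(*params[c]))

# Two 2-D Gaussian blobs as toy data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_gaussian_nb(X, y)
print(predict_gaussian_nb(np.array([2.8, 3.1]), params))  # expected: 1
```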
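For the norms item, a small comparison of some p-norms with NumPy (the vector here is arbitrary):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

# p-norm: ||x||_p = (sum_i |x_i|^p)^(1/p)
def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

print(p_norm(x, 1), np.linalg.norm(x, 1))            # L1 (Manhattan) norm
print(p_norm(x, 2), np.linalg.norm(x, 2))            # L2 (Euclidean) norm
print(np.max(np.abs(x)), np.linalg.norm(x, np.inf))  # limit p -> inf: max norm
```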
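For the inner-products item, the dot product, the Frobenius inner product, and the canonical norms they induce:

```python
import numpy as np

# Dot product of vectors, and the Euclidean norm it induces
u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, -1.0, 0.5])
print(np.dot(u, v))                              # <u, v>
print(np.sqrt(np.dot(u, u)), np.linalg.norm(u))  # ||u|| = sqrt(<u, u>)

# Frobenius inner product: <A, B>_F = sum_ij A_ij B_ij = trace(A^T B)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.5, 1.0], [1.5, 2.0]])
print(np.sum(A * B), np.trace(A.T @ B))
# The Frobenius norm is the canonical norm of this inner product
print(np.sqrt(np.sum(A * A)), np.linalg.norm(A, 'fro'))
```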
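A minimal Gram-Schmidt sketch; it uses the modified variant (projections are subtracted from the running vector), which is numerically more stable:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list of vectors by subtracting,
    from each vector, its projections onto the vectors already produced."""
    ortho = []
    for v in vectors:
        w = v.astype(float)
        for u in ortho:
            w = w - (np.dot(w, u) / np.dot(u, u)) * u  # remove component along u
        ortho.append(w)
    return ortho

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
os_ = gram_schmidt(vs)
# All pairwise dot products should be (numerically) zero
for i in range(len(os_)):
    for j in range(i + 1, len(os_)):
        print(round(np.dot(os_[i], os_[j]), 10))
```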
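For the boxplot item, a sketch that computes quartiles, Tukey's 1.5-IQR fences, and outliers for a made-up sample:

```python
import numpy as np

data = np.array([2, 3, 4, 5, 5, 6, 7, 8, 9, 25], dtype=float)

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1                   # interquartile range
lower_fence = q1 - 1.5 * iqr    # Tukey's fences
upper_fence = q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(f"Q1={q1}, median={median}, Q3={q3}, IQR={iqr}")
print(f"fences: [{lower_fence}, {upper_fence}], outliers: {outliers}")
```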
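For the logistic/sigmoid items, a generalized logistic function; with L=1, k=1, x0=0 it reduces to the standard sigmoid:

```python
import numpy as np

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """General logistic curve: L / (1 + exp(-k (x - x0)))."""
    return L / (1.0 + np.exp(-k * (x - x0)))

x = np.linspace(-6, 6, 5)
print(logistic(x))   # standard sigmoid values, rising from near 0 to near 1
print(np.tanh(x))    # tanh, another common sigmoid-shaped curve
```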
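A sketch of the inclusion-exclusion principle for the size of a union of finite sets, checked against Python's built-in set union:

```python
from itertools import combinations

def union_size(sets):
    """|A1 u ... u An| by inclusion-exclusion: add sizes of odd-sized
    intersections, subtract sizes of even-sized ones."""
    total = 0
    for r in range(1, len(sets) + 1):
        sign = (-1) ** (r + 1)
        for combo in combinations(sets, r):
            total += sign * len(set.intersection(*combo))
    return total

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
print(union_size([A, B, C]), len(A | B | C))  # both print 7
```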
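For Bayes' rule, a classic diagnostic-test calculation (the probabilities are made up for illustration):

```python
# P(D|+) = P(+|D) P(D) / P(+), with P(+) from the total probability theorem
p_d = 0.01          # prior: P(disease)
p_pos_d = 0.95      # sensitivity: P(positive | disease)
p_pos_nd = 0.05     # false-positive rate: P(positive | no disease)

p_pos = p_pos_d * p_d + p_pos_nd * (1 - p_d)   # total probability theorem
p_d_pos = p_pos_d * p_d / p_pos                # Bayes' rule
print(round(p_d_pos, 4))  # about 0.161
```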
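For linear regression, a least squares fit on synthetic data via NumPy's lstsq:

```python
import numpy as np

# Noisy points around y = 2x + 1
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.shape)

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])
# Least squares solution of X w = y
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # should be close to [1, 2]
```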
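For ridge regression, the closed-form solution with the regularization weight lam; note that this simple sketch penalizes the intercept as well:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution: w = (X^T X + lam I)^(-1) X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.shape)
X = np.column_stack([np.ones_like(x), x])
print(ridge_fit(X, y, lam=0.0))   # lam=0 reduces to ordinary least squares
print(ridge_fit(X, y, lam=10.0))  # larger lam shrinks the weights
```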
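For the gradient descent item, one loop that covers both cases: lam=0 gives linear regression, lam>0 gives ridge:

```python
import numpy as np

def gd_ridge(X, y, lam=0.0, lr=0.01, n_iters=2000):
    """Gradient descent on the ridge loss
    L(w) = ||X w - y||^2 / n + lam ||w||^2; lam=0 is plain linear regression."""
    n = len(y)
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = 2.0 * X.T @ (X @ w - y) / n + 2.0 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.shape)
X = np.column_stack([np.ones_like(x), x])
print(gd_ridge(X, y, lam=0.0))   # linear regression, close to [1, 2]
print(gd_ridge(X, y, lam=0.1))   # ridge shrinks the weights slightly
```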
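For the last item, the tangent plane to the sphere x^2+y^2+z^2=14 at the point (1, 2, 3), built from the gradient as the normal vector:

```python
import numpy as np

# Surface f(x, y, z) = x^2 + y^2 + z^2 = 14, point P = (1, 2, 3)
def grad_f(p):
    x, y, z = p
    return np.array([2 * x, 2 * y, 2 * z])  # analytic gradient of f

P = np.array([1.0, 2.0, 3.0])
n = grad_f(P)  # gradient at P is normal to the surface there

# Tangent plane: n . (r - P) = 0  ->  2x + 4y + 6z = 28
a, b, c = n
d = n @ P
print(f"{a}*x + {b}*y + {c}*z = {d}")
```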
ostad-ai/Machine-Learning
This repository contains topics and code related to Machine Learning and Data Science, especially in Python