ecornell_ML

Cornell Certificate Program - Machine Learning

Course Website

For more information, please visit the eCornell website.

Contributing

Pull requests are welcome if you find an error or an incorrect answer in any of the solutions.

For Educational Purposes Only. It is better to complete the assignments yourself and refer to this repository only when you get stuck.

Overview

Machine learning is emerging as one of today’s fastest-growing fields as the role of automation and AI expands in every industry and function.

Cornell’s Machine Learning certificate program equips you to implement machine learning algorithms using Python. Using a combination of math and intuition, you will practice framing machine learning problems and construct a mental model to understand how data scientists approach these problems programmatically. Through investigation and implementation of k-nearest neighbors, naive Bayes, regression trees, and others, you’ll explore a variety of machine learning algorithms and practice selecting the best model, considering key principles of how to implement those models effectively. You will also have an opportunity to implement algorithms on live data while practicing debugging and improving models through approaches such as ensemble methods and support vector machines. Finally, the coursework will explore the inner workings of neural networks and how to construct and adapt neural networks for various types of data.

Machine learning is complex. While you do not need prior machine learning experience to take the program, we strongly recommend familiarity with Python along with a mathematical background in probability theory, statistics, multivariate calculus, and linear algebra. This program uses Python and the NumPy library for code exercises and projects. Projects will be completed using Jupyter Notebooks.

This certificate program includes two self-paced lessons covering the linear algebra computations used in the Machine Learning curriculum. You may refer to these lessons at any time before or during your Machine Learning program.

Courses

CIS531 - Problem-Solving with Machine Learning

This course begins by helping you reframe real-world problems in terms of supervised machine learning. Through understanding the “ingredients” of a machine learning problem, you will investigate how to implement, evaluate, and improve machine learning algorithms. Ultimately, you will implement the k-Nearest Neighbors (k-NN) algorithm to build a face recognition system. Tools like the NumPy Python library are introduced to assist in simplifying and improving Python code.
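
As a rough sketch of the idea (not the course's assignment code), the snippet below implements k-NN with plain NumPy on a small synthetic dataset; the data, function name, and choice of Euclidean distance are assumptions made for illustration.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Predict a label for each test point by majority vote among the k
    nearest training points under Euclidean distance."""
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d2 = (np.sum(X_test ** 2, axis=1)[:, None]
          - 2 * X_test @ X_train.T
          + np.sum(X_train ** 2, axis=1)[None, :])
    # Indices of the k closest training points for every test point
    nearest = np.argsort(d2, axis=1)[:, :k]
    # Majority vote over the neighbors' labels
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])

# Tiny synthetic example: two well-separated 2-D clusters
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
X_test = np.array([[0.5, 0.5], [3.5, 4.0]])
print(knn_predict(X_train, y_train, X_test, k=5))  # expected: [0 1]
```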

CIS532 - Estimating Probability Distributions

In this course, you will use the Maximum Likelihood Estimate (MLE) to approximate distributions from data. Using the Bayes Optimal Classifier, you will learn how the assumptions you make will impact your estimations. You will then learn to apply the Naive Bayes Assumption to estimate probabilities for problems that contain a high number of dimensions. Ultimately, you will apply this understanding to implement the Naive Bayes Classifier in order to build a name classification system.
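
A minimal sketch of both ideas, assuming binary features, two classes, and Laplace smoothing; the toy features and function names are invented for illustration and are not the course's name-classification setup.

```python
import numpy as np

def fit_naive_bayes(X, y, alpha=1.0):
    """MLE with Laplace smoothing (alpha) for a Bernoulli naive Bayes model.
    X is an (n, d) binary feature matrix; y holds labels in {0, 1}."""
    priors, cond = [], []
    for c in (0, 1):
        Xc = X[y == c]
        priors.append(len(Xc) / len(X))                                # P(y = c)
        cond.append((Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha))  # P(x_j = 1 | y = c)
    return np.array(priors), np.array(cond)

def predict_naive_bayes(X, priors, cond):
    """Pick argmax_c of log P(y = c) + sum_j log P(x_j | y = c)."""
    log_post = (np.log(priors)[None, :]
                + X @ np.log(cond).T
                + (1 - X) @ np.log(1 - cond).T)
    return log_post.argmax(axis=1)

# Toy example: features such as "ends in a vowel" and "longer than six letters"
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]])
y = np.array([1, 1, 0, 0])
priors, cond = fit_naive_bayes(X, y)
print(predict_naive_bayes(np.array([[1, 0], [0, 1]]), priors, cond))  # expected: [1 0]
```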

CIS533 - Learning with Linear Classifiers

In this course, you are introduced to and implement the Perceptron algorithm, a linear classifier developed at Cornell in 1957. Through the exploration of linear and logistic regression, you will learn to estimate probabilities that remain true to the problem setting. Using gradient descent, you will minimize loss functions. Ultimately, you will apply these skills to build an email spam classifier.
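
A minimal Perceptron sketch in NumPy, assuming labels in {-1, +1} and a synthetic, linearly separable dataset; the function name and hyperparameters are illustrative, not the course's implementation.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classic Perceptron rule: whenever a point is misclassified, add
    (or subtract) it to the weight vector. Labels must be in {-1, +1}."""
    # Append a constant 1 to each row so the bias is absorbed into w
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi         # Perceptron update
                mistakes += 1
        if mistakes == 0:            # the data is separated; stop early
            break
    return w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (25, 2)), rng.normal(2, 1, (25, 2))])
y = np.array([-1] * 25 + [1] * 25)
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
print("training accuracy:", (preds == y).mean())
```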

CIS534 - Decision Trees and Model Selection

In this course, you will be introduced to the classification and regression trees (CART) algorithm. By implementing CART, you will build decision trees for a supervised classification problem. Next, you will explore how the hyperparameters of an algorithm can be adjusted and what impact they have on the accuracy of a predictive model. Through this exploration, you will practice selecting an appropriate model for a problem and dataset. You will then load a live dataset, select a model, and train a classifier to make predictions on that data.
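
As a hedged sketch (not the course's CART code), the snippet below grows a small classification tree with Gini impurity; max_depth stands in for the hyperparameter discussed above, and the helper names and toy data are assumptions for illustration.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Search every feature/threshold pair for the split that minimizes the
    weighted Gini impurity of the two children."""
    best_j, best_t, best_score = None, None, gini(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            score = left.mean() * gini(y[left]) + (1 - left.mean()) * gini(y[~left])
            if score < best_score:
                best_j, best_t, best_score = j, t, score
    return best_j, best_t

def build_tree(X, y, depth=0, max_depth=3):
    """Recursively grow the tree; max_depth is the hyperparameter to tune."""
    if depth == max_depth or len(np.unique(y)) == 1:
        return {"leaf": np.bincount(y).argmax()}
    j, t = best_split(X, y)
    if j is None:                    # no split improves the impurity
        return {"leaf": np.bincount(y).argmax()}
    left = X[:, j] <= t
    return {"feature": j, "threshold": t,
            "left": build_tree(X[left], y[left], depth + 1, max_depth),
            "right": build_tree(X[~left], y[~left], depth + 1, max_depth)}

def predict_one(tree, x):
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] <= tree["threshold"] else tree["right"]
    return tree["leaf"]

# Toy dataset: feature 0 alone separates the two classes
X = np.array([[1.0, 3.0], [2.0, 1.0], [3.0, 2.5], [4.0, 0.5]])
y = np.array([0, 0, 1, 1])
tree = build_tree(X, y, max_depth=2)
print([predict_one(tree, x) for x in X])  # expected: [0, 0, 1, 1]
```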

CIS535 - Debugging and Improving Machine Learning Models

In this course, you will investigate the underlying mechanics of a machine learning algorithm’s prediction accuracy by exploring the bias-variance trade-off. You will identify the causes of prediction error by recognizing high bias and high variance, and you will learn techniques to reduce the negative impact these errors have on learning models. Working with ensemble methods, you will implement techniques that improve the results of your predictive models, creating more reliable and efficient algorithms.
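
As one hedged example of an ensemble method, the sketch below bags decision stumps: each weak learner is fit on a bootstrap sample and predictions are combined by majority vote, which is the variance-reducing idea behind bagging. The stump learner, function names, and synthetic data are assumptions for illustration, not the course's code.

```python
import numpy as np

def fit_stump(X, y):
    """A depth-one decision stump (weak learner): choose the single
    feature/threshold split that minimizes training error."""
    best = (0, 0.0, np.inf, 0, 0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lpred = np.bincount(y[left]).argmax()
            rpred = np.bincount(y[~left]).argmax()
            err = np.mean(np.where(left, lpred, rpred) != y)
            if err < best[2]:
                best = (j, t, err, lpred, rpred)
    j, t, _, lpred, rpred = best
    return lambda Xq: np.where(Xq[:, j] <= t, lpred, rpred)

def bagging(X, y, n_learners=25, rng=None):
    """Train each stump on a bootstrap sample; predict by majority vote."""
    if rng is None:
        rng = np.random.default_rng(0)
    learners = []
    for _ in range(n_learners):
        idx = rng.integers(0, len(X), len(X))   # sample with replacement
        learners.append(fit_stump(X[idx], y[idx]))
    def predict(Xq):
        votes = np.stack([h(Xq) for h in learners])
        return (votes.mean(axis=0) >= 0.5).astype(int)
    return predict

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = bagging(X, y, n_learners=25, rng=rng)
print("training accuracy:", (predict(X) == y).mean())
```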

CIS536 - Learning with Kernel Machines

In this course, you will explore support-vector machines and use them to find a maximum margin classifier. You will then construct a mental model for how loss functions and regularizers are used to minimize risk and improve generalization of a learning model. Through the use of feature expansion, you will extend the capabilities of linear classifiers to find non-linear classification boundaries. Finally, you will employ kernel machines to train algorithms that can learn in infinite dimensional feature spaces.
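
A minimal sketch of the primal soft-margin objective, assuming labels in {-1, +1}: it minimizes the regularized hinge loss with plain (sub)gradient descent. A kernel machine would replace the inner products with a kernel function, which this linear example does not do; the hyperparameters and data are illustrative assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize (1/n) * sum_i max(0, 1 - y_i * w.x_i) + lam * ||w||^2
    with (sub)gradient descent. Labels must be in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias into w
    w = np.zeros(Xb.shape[1])
    n = len(Xb)
    for _ in range(epochs):
        margins = y * (Xb @ w)
        active = margins < 1                    # points violating the margin
        grad = -(y[active][:, None] * Xb[active]).sum(axis=0) / n + 2 * lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)
w = train_linear_svm(X, y)
preds = np.sign(np.hstack([X, np.ones((len(X), 1))]) @ w)
print("training accuracy:", (preds == y).mean())
```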

CIS537 - Deep Learning and Neural Networks

In this course, you will investigate the fundamental components of machine learning that are used to build a neural network. You will then construct a neural network and train it on a simple dataset to make predictions on new data. Next, you will look at how a neural network can be adapted for image data by exploring convolutional networks. You will have the opportunity to explore a simple implementation of a convolutional neural network written in PyTorch, a deep learning platform. Finally, you will again adapt neural networks, this time for sequential data. Using a deep averaging network, you will implement a neural sequence model that analyzes product reviews to determine consumer sentiment.
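
Since the course mentions PyTorch, here is a minimal, illustrative sketch of defining and training a small fully connected network on synthetic data with PyTorch; the architecture, hyperparameters, and data are assumptions for the example, not the course's networks.

```python
import torch
from torch import nn

# Synthetic binary classification task
torch.manual_seed(0)
X = torch.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).long()

model = nn.Sequential(
    nn.Linear(2, 16),   # input features -> hidden layer
    nn.ReLU(),          # non-linearity
    nn.Linear(16, 2),   # hidden layer -> class scores (logits)
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss
    loss.backward()               # backpropagation
    optimizer.step()              # gradient descent step

accuracy = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```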