Machine-Learning_Stanford-University

Machine Learning course offered by Stanford University on Coursera | Instructor: Andrew Ng

MIT License

~~~~~~~~~~~~ Machine Learning Course ~~~~~~~~~~~~

PROJECT IN PROGRESS

Syllabus


  • Introduction

    • What is Machine Learning?
    • Supervised Learning
    • Unsupervised Learning

    [1 practice exercise]

  • Linear Regression with One Variable

    • Model Representation
    • Cost Function
    • Gradient Descent
    • Gradient Descent For Linear Regression

    [1 practice exercise]
    [Linear Regression I using Python]
    [Cost Function using Python]
    [Gradient Descent using Python]
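
    A minimal NumPy sketch of what the notebooks above implement: the
    squared-error cost J(theta) and batch gradient descent. Function and
    variable names here are illustrative, not the course's:

    ```python
    import numpy as np

    def compute_cost(X, y, theta):
        """Squared-error cost J(theta) = (1/2m) * sum((X@theta - y)^2)."""
        errors = X @ theta - y
        return (errors @ errors) / (2 * len(y))

    def gradient_descent(X, y, theta, alpha=0.1, iters=1000):
        """Batch update theta := theta - (alpha/m) * X' @ (X@theta - y)."""
        m = len(y)
        for _ in range(iters):
            theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
        return theta

    # Toy data y = 1 + 2x, with a column of ones for the intercept term.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = 1 + 2 * x
    X = np.column_stack([np.ones_like(x), x])
    print(gradient_descent(X, y, np.zeros(2)))  # close to [1., 2.]
    ```
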
  • Linear Algebra Review

    • Matrices and Vectors
    • Addition and Scalar Multiplication
    • Matrix Vector Multiplication
    • Matrix Matrix Multiplication
    • Matrix Multiplication Properties
    • Inverse and Transpose

    [1 practice exercise]
    [Python Programming]
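
    For the Python port, this whole review maps onto a handful of NumPy
    operations (a quick sketch):

    ```python
    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[0, 1], [1, 0]])
    v = np.array([1, 2])

    print(A + B)             # element-wise addition
    print(3 * A)             # scalar multiplication
    print(A @ v)             # matrix-vector multiplication
    print(A @ B)             # matrix-matrix multiplication (A @ B != B @ A)
    print(A.T)               # transpose
    print(np.linalg.inv(A))  # inverse (square, non-singular matrices only)
    ```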

  • Linear Regression with Multiple Variables

    • Multiple Features
    • Gradient Descent for Multiple Variables
    • Gradient Descent in Practice I - Feature Scaling
    • Gradient Descent in Practice II - Learning Rate
    • Features and Polynomial Regression
    • Normal Equation
    • Normal Equation Noninvertibility
    • Working on and Submitting Programming Assignments

    [1 practice exercise]
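
    A short sketch of feature scaling and the normal equation, with
    illustrative names:

    ```python
    import numpy as np

    def feature_normalize(X):
        """Feature scaling for gradient descent: zero mean, unit std per feature."""
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        return (X - mu) / sigma, mu, sigma

    def normal_equation(X, y):
        """Closed form theta = pinv(X'X) X'y; pinv also behaves when X'X is
        non-invertible (redundant features, or more features than examples)."""
        return np.linalg.pinv(X.T @ X) @ X.T @ y

    # Toy data y = 1 + 2*x1 + 3*x2 (no scaling needed for the normal equation).
    X_raw = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
    y = 1 + X_raw @ np.array([2.0, 3.0])
    X = np.column_stack([np.ones(len(y)), X_raw])
    print(normal_equation(X, y))  # close to [1., 2., 3.]
    ```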

  • Octave/Matlab Tutorial

    • Basic Operations
    • Moving Data Around
    • Computing on Data
    • Plotting Data
    • Control Statements: for, while, if statements
    • Vectorization

    [1 practice exercise]
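
    The course teaches this module in Octave/Matlab; the vectorization lesson
    carries over directly to the NumPy ports in this repo:

    ```python
    import numpy as np

    theta = np.array([1.0, 2.0, 3.0])
    x = np.array([1.0, 4.0, 5.0])

    # Unvectorized hypothesis: sum theta[j] * x[j] with an explicit loop.
    h = 0.0
    for j in range(len(theta)):
        h += theta[j] * x[j]

    # Vectorized: a single inner product, shorter and far faster on large arrays.
    h_vec = theta @ x
    assert np.isclose(h, h_vec)
    ```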


  • Logistic Regression

    • Classification
    • Hypothesis Representation
    • Decision Boundary
    • Cost Function
    • Simplified Cost Function and Gradient Descent
    • Advanced Optimization
    • Multiclass Classification: One-vs-all

    [1 practice exercise]
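
    A compact sketch of the hypothesis h(x) = g(theta'x), the logistic cost,
    and its gradient; for the Advanced Optimization lecture,
    scipy.optimize.minimize plays the role of Octave's fminunc. Names are
    illustrative:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def cost(theta, X, y):
        """J = -(1/m) * sum(y*log(h) + (1-y)*log(1-h)) with h = g(X@theta)."""
        h = sigmoid(X @ theta)
        return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / len(y)

    def gradient(theta, X, y):
        """Same shape as the linear regression gradient: (1/m) * X' @ (h - y)."""
        return X.T @ (sigmoid(X @ theta) - y) / len(y)

    # Toy (non-separable) 1-D data with an intercept column.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
    X = np.column_stack([np.ones_like(x), x])
    res = minimize(cost, np.zeros(2), args=(X, y), jac=gradient, method="BFGS")
    print(res.x)  # fitted [theta0, theta1]
    ```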

  • Regularization

    • The Problem of Overfitting
    • Cost Function
    • Regularized Linear Regression
    • Regularized Logistic Regression

    [1 practice exercise]
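
    The regularized logistic cost and gradient in the same style; note the
    course convention of not penalizing the intercept theta[0]:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def cost_reg(theta, X, y, lam):
        """Logistic cost plus the penalty (lam/2m) * sum(theta[1:]^2)."""
        m = len(y)
        h = sigmoid(X @ theta)
        unreg = -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / m
        return unreg + (lam / (2 * m)) * (theta[1:] @ theta[1:])

    def gradient_reg(theta, X, y, lam):
        """Unregularized gradient, with (lam/m)*theta added for j >= 1 only."""
        m = len(y)
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        grad[1:] += (lam / m) * theta[1:]
        return grad

    # Tiny check: one positive and one negative example, lambda = 1.
    theta = np.array([1.0, 2.0])
    X = np.array([[1.0, 3.0], [1.0, -2.0]])
    y = np.array([1.0, 0.0])
    print(cost_reg(theta, X, y, lam=1.0))  # about 1.02
    ```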


  • Neural Networks: Representation

    • Non-linear Hypotheses
    • Neurons and the Brain
    • Model Representation I
    • Model Representation II
    • Examples and Intuitions I
    • Examples and Intuitions II
    • Multiclass Classification

    [1 practice exercise]
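
    A tiny forward-propagation sketch built on the AND-gate intuition from the
    lectures (the -30/20/20 weights are the classic example; wiring them into
    a two-layer network this way is my own illustration):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def forward(x, Theta1, Theta2):
        """One forward pass: add a bias unit, apply weights, squash, repeat."""
        a1 = np.concatenate([[1.0], x])                     # input + bias
        a2 = np.concatenate([[1.0], sigmoid(Theta1 @ a1)])  # hidden + bias
        return sigmoid(Theta2 @ a2)                         # output layer

    Theta1 = np.array([[-30.0, 20.0, 20.0]])  # one hidden unit computing AND
    Theta2 = np.array([[-10.0, 20.0]])        # output passes the hidden unit through
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        print(x, forward(np.array(x, dtype=float), Theta1, Theta2).round(3))
    ```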


  • Neural Networks: Learning

    • Cost Function
    • Backpropagation Algorithm
    • Backpropagation Intuition
    • Implementation Note: Unrolling Parameters
    • Gradient Checking
    • Random Initialization
    • Putting It Together
    • Autonomous Driving

    [1 practice exercise]
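
    Gradient checking is the most self-contained idea in this module: estimate
    each partial derivative with a two-sided difference, compare against
    backprop, then switch the check off for training. A sketch:

    ```python
    import numpy as np

    def numerical_gradient(J, theta, eps=1e-4):
        """Two-sided estimate (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps)."""
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (J(theta + e) - J(theta - e)) / (2 * eps)
        return grad

    # Check against a known analytic gradient: J = theta'theta, dJ/dtheta = 2*theta.
    theta = np.array([1.0, -2.0, 3.0])
    print(numerical_gradient(lambda t: t @ t, theta))  # close to [2., -4., 6.]
    ```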


  • Advice for Applying Machine Learning

    • Deciding What to Try Next
    • Evaluating a Hypothesis
    • Model Selection and Train/Validation/Test Sets
    • Diagnosing Bias vs. Variance
    • Regularization and Bias/Variance
    • Learning Curves
    • Deciding What to Do Next Revisited

    [1 practice exercise]
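
    A sketch of the 60/20/20 train / cross-validation / test split that this
    module's diagnostics are built on (ratios from the lectures, code my own):

    ```python
    import numpy as np

    def split(X, y, seed=0):
        """Shuffle, then cut into 60% train, 20% cross-validation, 20% test."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        tr = idx[: int(0.6 * len(y))]
        cv = idx[int(0.6 * len(y)) : int(0.8 * len(y))]
        te = idx[int(0.8 * len(y)) :]
        return (X[tr], y[tr]), (X[cv], y[cv]), (X[te], y[te])

    X = np.arange(20.0).reshape(10, 2)
    y = np.arange(10.0)
    train, cv, test = split(X, y)
    print(len(train[1]), len(cv[1]), len(test[1]))  # 6 2 2
    ```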

  • Machine Learning System Design

    • Prioritizing What to Work On
    • Error Analysis
    • Error Metrics for Skewed Classes
    • Trading Off Precision and Recall
    • Data For Machine Learning

    [1 practice exercise]
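
    The skewed-class metrics, computed straight from the counts (this sketch
    assumes at least one predicted and one actual positive, so no zero
    divisions):

    ```python
    import numpy as np

    def precision_recall_f1(y_true, y_pred):
        """Precision, recall, and F1 score, with y = 1 as the rare class."""
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0])
    y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
    print(precision_recall_f1(y_true, y_pred))  # each 2/3 on this toy data
    ```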


  • Support Vector Machines

    • Optimization Objective
    • Large Margin Intuition
    • Mathematics Behind Large Margin Classification
    • Kernels I
    • Kernels II
    • Using An SVM

    [1 practice exercise]
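
    A sketch of the Gaussian (RBF) kernel used to build similarity features;
    for Using An SVM, scikit-learn's sklearn.svm.SVC(kernel="rbf") is the
    natural off-the-shelf package on the Python side:

    ```python
    import numpy as np

    def gaussian_kernel(x1, x2, sigma=1.0):
        """Similarity f = exp(-||x1 - x2||^2 / (2*sigma^2)): near 1 for close
        points, near 0 for distant ones."""
        diff = x1 - x2
        return np.exp(-(diff @ diff) / (2 * sigma**2))

    print(gaussian_kernel(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 1.0
    print(gaussian_kernel(np.array([1.0, 2.0]), np.array([4.0, 6.0])))  # near 0
    ```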


  • Unsupervised Learning

    • Unsupervised Learning: Introduction
    • K-Means Algorithm
    • Optimization Objective
    • Random Initialization
    • Choosing the Number of Clusters

    [1 practice exercise]
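
    The two alternating K-Means steps in a few lines of NumPy (empty-cluster
    handling is omitted for brevity):

    ```python
    import numpy as np

    def kmeans(X, K, iters=10, seed=0):
        """Random initialization, then alternate assignment and move steps."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), K, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)    # assign each point to its closest centroid
            centroids = np.array([X[labels == k].mean(axis=0) for k in range(K)])
        return centroids, labels             # move step: centroid = mean of its points

    X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
    print(kmeans(X, 2)[0])  # centroids near (0, 0.5) and (10, 10.5)
    ```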

  • Dimensionality Reduction

    • Motivation I: Data Compression
    • Motivation II: Visualization
    • Principal Component Analysis Problem Formulation
    • Principal Component Analysis Algorithm
    • Reconstruction from Compressed Representation
    • Choosing the Number of Principal Components
    • Advice for Applying PCA

    [1 practice exercise]
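
    A PCA sketch following the lecture recipe: normalize, take the SVD of the
    covariance matrix Sigma = (1/m) X'X, project onto the top k components,
    and report the variance retained (the quantity used for choosing k):

    ```python
    import numpy as np

    def pca(X, k):
        X = (X - X.mean(axis=0)) / X.std(axis=0)   # feature normalization
        U, S, _ = np.linalg.svd(X.T @ X / len(X))  # principal directions
        Z = X @ U[:, :k]                           # compressed representation
        X_rec = Z @ U[:, :k].T                     # reconstruction from Z
        return Z, X_rec, S[:k].sum() / S.sum()     # fraction of variance retained

    X = np.array([[2.0, 1.9], [1.0, 1.1], [3.0, 3.2], [4.0, 3.8]])
    Z, X_rec, retained = pca(X, 1)
    print(retained)  # near 1: the two features are almost collinear
    ```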


  • Anomaly Detection

    • Problem Motivation
    • Gaussian Distribution
    • Algorithm
    • Developing and Evaluating an Anomaly Detection System
    • Anomaly Detection vs. Supervised Learning
    • Choosing What Features to Use
    • Multivariate Gaussian Distribution
    • Anomaly Detection using the Multivariate Gaussian Distribution

    [1 practice exercise]
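
    A sketch of the basic density-estimation algorithm: fit an independent
    Gaussian per feature, then flag x as anomalous when p(x) < epsilon
    (epsilon would be tuned on a labeled cross-validation set):

    ```python
    import numpy as np

    def fit_gaussian(X):
        """Per-feature mean and variance of the (assumed normal) training data."""
        return X.mean(axis=0), X.var(axis=0)

    def p(x, mu, sigma2):
        """Product of independent univariate Gaussian densities."""
        return np.prod(np.exp(-((x - mu) ** 2) / (2 * sigma2))
                       / np.sqrt(2 * np.pi * sigma2))

    X = np.array([[1.0, 10.0], [1.1, 9.9], [0.9, 10.1], [1.0, 10.0]])
    mu, sigma2 = fit_gaussian(X)
    epsilon = 1e-3
    print(p(np.array([1.0, 10.0]), mu, sigma2) < epsilon)  # False: looks normal
    print(p(np.array([5.0, 2.0]), mu, sigma2) < epsilon)   # True: anomaly
    ```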

  • Recommender Systems

    • Problem Formulation
    • Content Based Recommendations
    • Collaborative Filtering
    • Collaborative Filtering Algorithm
    • Vectorization: Low Rank Matrix Factorization
    • Implementational Detail: Mean Normalization

    [1 practice exercise]
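
    The vectorized collaborative-filtering cost as a low-rank factorization:
    predictions X @ Theta.T are scored against Y only where the indicator R
    marks an existing rating (an illustrative sketch):

    ```python
    import numpy as np

    def cofi_cost(X, Theta, Y, R, lam):
        """X: movie features, Theta: user parameters, Y: ratings, R: 1 if rated."""
        err = (X @ Theta.T - Y) * R
        return 0.5 * np.sum(err**2) + (lam / 2) * (np.sum(X**2) + np.sum(Theta**2))

    Y = np.array([[5.0, 0.0], [0.0, 4.0]])   # two movies, two users
    R = np.array([[1, 0], [0, 1]])           # each user rated one movie
    X = np.ones((2, 1))
    Theta = np.ones((2, 1))
    print(cofi_cost(X, Theta, Y, R, lam=0.0))  # 0.5*((1-5)^2 + (1-4)^2) = 12.5
    ```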


  • Large Scale Machine Learning

    • Learning With Large Datasets
    • Stochastic Gradient Descent
    • Mini-Batch Gradient Descent
    • Stochastic Gradient Descent Convergence
    • Online Learning
    • Map Reduce and Data Parallelism

    [1 practice exercise]
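
    A stochastic gradient descent sketch for linear regression: one example
    per parameter update instead of a full pass over a (possibly huge)
    training set, with reshuffling between epochs:

    ```python
    import numpy as np

    def sgd(X, y, alpha=0.1, epochs=20, seed=0):
        """Update theta := theta - alpha * (h(x_i) - y_i) * x_i, one i at a time."""
        rng = np.random.default_rng(seed)
        theta = np.zeros(X.shape[1])
        for _ in range(epochs):
            for i in rng.permutation(len(y)):   # shuffle, then single examples
                theta -= alpha * (X[i] @ theta - y[i]) * X[i]
        return theta

    x = np.linspace(0.0, 1.0, 100)
    y = 1 + 3 * x
    X = np.column_stack([np.ones_like(x), x])
    print(sgd(X, y))  # approaches [1., 3.]
    ```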


  • Application Example: Photo OCR

    • Problem Description and Pipeline
    • Sliding Windows
    • Getting Lots of Data and Artificial Data
    • Ceiling Analysis: What Part of the Pipeline to Work on Next
    • Summary and Thank You

    [1 practice exercise]
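
    A minimal sliding-window sketch: enumerate the patches a text or
    pedestrian classifier would score at one scale (the full pipeline also
    rescales the image and scans again):

    ```python
    import numpy as np

    def sliding_windows(image, win=(2, 2), stride=1):
        """Yield (row, col) positions and the patch under the window."""
        H, W = image.shape
        h, w = win
        for r in range(0, H - h + 1, stride):
            for c in range(0, W - w + 1, stride):
                yield (r, c), image[r : r + h, c : c + w]

    img = np.arange(16.0).reshape(4, 4)
    print(sum(1 for _ in sliding_windows(img)))  # 9 positions for a 2x2 window
    ```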

References