How to use pandoc markdown with LaTeX content in a GitHub project, especially for serving it as README.md

- Write README.pandoc.md
- Publish it as README.md (GitHub-flavored markdown + webtex)
- Use the following command to convert (incorporating it into a build script or commit hook would be nice):

      pandoc -t markdown_github --webtex README.pandoc.md -o README.md
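To see what the conversion does, here is a minimal, illustrative sketch (the sentence and formula in the snippet are made up for this example): a line in README.pandoc.md written with inline TeX math, such as

```markdown
The squared-error cost is $J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2$.
```

comes out in README.md with the math span replaced by an image link to an external LaTeX-rendering service (the default service depends on the pandoc version, and a custom one can be passed as `--webtex=URL`), so GitHub displays the formula as an image even though it does not render raw TeX. On newer pandoc releases the GitHub-flavored writer is called `gfm`; `markdown_github` is the older name for the same target.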
A test document follows (an outline of the Machine Learning course notes):
Introduction
- What is Machine Learning?
- Supervised Learning
- Unsupervised Learning
Linear Regression with One Variable
- Model (Hypothesis)
- Cost Function
- Gradient Descent
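The outline above is the kind of content this webtex setup is meant to render. As a sample, the standard formulas for the one-variable linear regression topics, in the usual course notation (h_θ the hypothesis, J the cost, m the number of training examples, α the learning rate, (x^(i), y^(i)) the i-th example; none of these symbols appear in the outline itself):

$$ h_\theta(x) = \theta_0 + \theta_1 x $$

$$ J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 $$

$$ \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1) $$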
Linear Regression with Multiple Variables
- Multiple Features
- Feature Scaling
- Learning Rate
- Features and Polynomial Regression
- Computing Parameters Analytically - Normal Equation
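With multiple features the hypothesis is written in vector form, and the normal equation gives the closed-form least-squares solution (X denotes the design matrix and y the target vector, symbols introduced here for illustration):

$$ h_\theta(x) = \theta^T x $$

$$ \theta = (X^T X)^{-1} X^T y $$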
Logistic Regression
- Classification
- Logistic Hypothesis
- Decision Boundary
- Cost Function
- Simplified Cost Function and Gradient Descent
- Advanced Optimization
- Multiclass Classification - One-vs-all
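The logistic (sigmoid) hypothesis and the unregularized logistic regression cost, in the same notation:

$$ h_\theta(x) = \frac{1}{1 + e^{-\theta^T x}} $$

$$ J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right) \log\left(1 - h_\theta(x^{(i)})\right) \right] $$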
Regularization
- The Problem of Overfitting
- Cost Function
- Regularized Linear Regression
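Regularization adds a penalty on the parameter magnitudes, controlled by a parameter λ (introduced here; the bias term θ₀ is conventionally left unpenalized). For regularized linear regression the cost becomes:

$$ J(\theta) = \frac{1}{2m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{j=1}^{n} \theta_j^2 \right] $$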
Neural Networks: Representation
- Non-linear Hypothesis
- Neurons and the Brain
- Model Representation
- Example - AND, (NOT x1) AND (NOT x2), OR -> XNOR
- Example - Multiclass Classification
Neural Networks: Learning
- Cost Function - NN multiclass classification
- Backpropagation
Backpropagation in Practice
- Unrolling Parameters
- Gradient Checking
- Random Initialization
- Putting It Together
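Gradient checking compares the backpropagated gradient against a two-sided numerical approximation; ε is a small constant (on the order of 10⁻⁴) and e_j the j-th unit vector, both introduced here for illustration:

$$ \frac{\partial}{\partial \theta_j} J(\theta) \approx \frac{J(\theta + \epsilon e_j) - J(\theta - \epsilon e_j)}{2 \epsilon} $$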
Advice for Applying Machine Learning
- Unsatisfying results -> What to Try Next?
Training set / Cross Validation set / Test set
- Evaluating a hypothesis with a separate test set
- Check overfit, generalization
- train:test = 70%:30%
- Model selection with another separate cross validation set
- Number of parameters and Bias/Variance
- Regularization and Bias/Variance
- Learning Curves: error vs. training set size
- Summary
Machine Learning System Design (with Spam classifier)
- Prioritizing What to Work On
- Error Analysis
- Error Metrics for Skewed Classes
- Precision and Recall trade-off
- Support Vector Machine
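For skewed classes plain accuracy is misleading, which is why precision and recall are traded off; with TP, FP, FN denoting true positives, false positives and false negatives (abbreviations introduced here):

$$ \text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} $$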
Unsupervised Learning
- Clustering
- K-Means Algorithm
- Optimization Objective
- Random Initialization
- Choosing the Number of Clusters
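K-means alternates cluster-assignment and centroid-update steps, both of which decrease the distortion objective below; c^(i) is the cluster index assigned to x^(i) and μ_k the k-th centroid (notation introduced here):

$$ J\left(c^{(1)}, \dots, c^{(m)}, \mu_1, \dots, \mu_K\right) = \frac{1}{m} \sum_{i=1}^{m} \left\| x^{(i)} - \mu_{c^{(i)}} \right\|^2 $$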
Dimensionality Reduction
- Motivation
- Motivation I: Data Compression
- Motivation II: Visualization
- Principal Component Analysis
- Principal Component Analysis Problem Formulation
- Principal Component Analysis Algorithm
- Applying PCA
- Reconstruction from Compressed Representation
- Choosing the Number of Principal Components
- Advice for Applying PCA
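A sketch of the usual PCA recipe (Σ, U, S introduced here, not in the outline): run it on mean-normalized, feature-scaled data, form the covariance matrix, take its singular value decomposition, and keep the first k columns of U, with k commonly chosen so that about 99% of the variance is retained:

$$ \Sigma = \frac{1}{m} \sum_{i=1}^{m} x^{(i)} \left( x^{(i)} \right)^T $$

$$ \frac{\sum_{i=1}^{k} S_{ii}}{\sum_{i=1}^{n} S_{ii}} \ge 0.99 $$

where S is the diagonal matrix of singular values from the SVD of Σ.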
Anomaly Detection
Recommender Systems
Large Scale Machine Learning
- Gradient Descent with Large Datasets
- Learning with Large Datasets
- Stochastic Gradient Descent
- Mini-Batch Gradient Descent
- Stochastic Gradient Descent Convergence
- Advanced Topics
- Online Learning
- Map Reduce and Data Parallelism
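Stochastic gradient descent updates the parameters from one (randomly shuffled) training example at a time rather than from the full sum over the training set; for linear regression the per-example update is

$$ \theta_j := \theta_j - \alpha \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} $$

Mini-batch gradient descent does the same with small batches of examples (e.g. around 10) per update.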
Application Example: Photo OCR
- Photo OCR
- Problem Description and Pipeline
- Sliding Windows
- Getting Lots of Data and Artificial Data
- Ceiling Analysis: What Part of the Pipeline to Work on Next
Conclusion
- Summary and Thank You