At DAIR.AI we ❤️ open AI education. In this repo, we index and organize some of the best and most recent machine learning courses available on YouTube.
Machine Learning
- Caltech CS156: Learning from Data
- Stanford CS229: Machine Learning
- Making Friends with Machine Learning
- Applied Machine Learning
- Introduction to Machine Learning (Tübingen)
- Machine Learning Lecture (Stefan Harmeling)
- Statistical Machine Learning (Tübingen)
- Probabilistic Machine Learning
- MIT 6.S897: Machine Learning for Healthcare (2019)
Deep Learning
- Neural Networks: Zero to Hero
- MIT: Deep Learning for Art, Aesthetics, and Creativity
- Stanford CS230: Deep Learning (2018)
- Introduction to Deep Learning (MIT)
- CMU Introduction to Deep Learning (11-785)
- Deep Learning: CS 182
- Deep Unsupervised Learning
- NYU Deep Learning SP21
- Foundation Models
- Deep Learning (Tübingen)
- Deep Learning Playlist
Scientific Machine Learning
Practical Machine Learning
- Full Stack Deep Learning
- Practical Deep Learning for Coders
- Stanford MLSys Seminars
- Machine Learning Engineering for Production (MLOps)
- MIT Introduction to Data-Centric AI
Natural Language Processing
- Stanford CS25 - Transformers United
- NLP Course (Hugging Face)
- CS224N: Natural Language Processing with Deep Learning
- CMU Neural Networks for NLP
- CS224U: Natural Language Understanding
- CMU Advanced NLP 2021/2022
- Multilingual NLP
- Advanced NLP
Computer Vision
- CS231N: Convolutional Neural Networks for Visual Recognition
- Deep Learning for Computer Vision
- Deep Learning for Computer Vision (DL4CV)
Reinforcement Learning
- Deep Reinforcement Learning
- Reinforcement Learning Lecture Series (DeepMind)
- Reinforcement Learning (Polytechnique Montreal, Fall 2021)
- Foundations of Deep RL
- Stanford CS234: Reinforcement Learning
Graph Machine Learning
Multi-Task Learning
Others
An introductory course in machine learning that covers the basic theory, algorithms, and applications.
- Lecture 1: The Learning Problem
- Lecture 2: Is Learning Feasible?
- Lecture 3: The Linear Model I
- Lecture 4: Error and Noise
- Lecture 5: Training versus Testing
- Lecture 6: Theory of Generalization
- Lecture 7: The VC Dimension
- Lecture 8: Bias-Variance Tradeoff
- Lecture 9: The Linear Model II
- Lecture 10: Neural Networks
- Lecture 11: Overfitting
- Lecture 12: Regularization
- Lecture 13: Validation
- Lecture 14: Support Vector Machines
- Lecture 15: Kernel Methods
- Lecture 16: Radial Basis Functions
- Lecture 17: Three Learning Principles
- Lecture 18: Epilogue
To learn some of the basics of ML:
- Linear Regression and Gradient Descent
- Logistic Regression
- Naive Bayes
- SVMs
- Kernels
- Decision Trees
- Introduction to Neural Networks
- Debugging ML Models ...
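As a small taste of the first topic in the list above, here is a minimal NumPy sketch of batch gradient descent for least-squares linear regression (an illustrative toy with made-up data, not material from the course):

```python
import numpy as np

# Toy data: y = 3x + 2 plus noise (made-up values, purely for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + 0.1 * rng.normal(size=100)

# Add a bias column so the intercept is learned as a weight.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.zeros(2)

lr = 0.1
for _ in range(500):
    grad = 2 / len(y) * Xb.T @ (Xb @ w - y)  # gradient of the mean squared error
    w -= lr * grad

print("learned [slope, intercept]:", w)
```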
A series of mini lectures covering various introductory topics in ML:
- Explainability in AI
- Classification vs. Regression
- Precision vs. Recall
- Statistical Significance
- Clustering and K-means
- Ensemble models ...
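Since the list above touches on precision vs. recall, here is a tiny self-contained sketch showing how the two metrics are computed from prediction counts (the labels below are made up for illustration):

```python
# Hypothetical binary predictions vs. ground truth (1 = positive class).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of everything predicted positive, how much was right?
recall = tp / (tp + fn)     # of everything actually positive, how much did we find?
print(f"precision={precision:.2f} recall={recall:.2f}")
```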
Course providing an in-depth overview of neural networks.
- Backpropagation
- Spelled-out intro to Language Modeling
- Activations and Gradients
- Becoming a Backprop Ninja
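In the spirit of the backpropagation lectures above, here is a minimal sketch of backprop through a single tanh neuron, checked against a finite-difference estimate (a toy illustration, not code from the course):

```python
import math

# Forward pass for one neuron: out = tanh(w * x + b), loss = (out - target)^2
def forward(w, x, b, target):
    z = w * x + b
    out = math.tanh(z)
    loss = (out - target) ** 2
    return z, out, loss

w, x, b, target = 0.5, 1.2, -0.3, 0.8
z, out, loss = forward(w, x, b, target)

# Backward pass via the chain rule.
dloss_dout = 2 * (out - target)
dout_dz = 1 - out ** 2          # derivative of tanh
dloss_dw = dloss_dout * dout_dz * x

# Numerical check with a finite difference.
eps = 1e-6
_, _, loss_eps = forward(w + eps, x, b, target)
print("analytic :", dloss_dw)
print("numerical:", (loss_eps - loss) / eps)
```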
Covers the application of deep learning for art, aesthetics, and creativity.
- Nostalgia -> Art -> Creativity -> Evolution as Data + Direction
- Efficient GANs
- Explorations in AI for Creativity
- Neural Abstractions
- Easy 3D Content Creation with Consistent Neural Fields ...
Covers the foundations of deep learning, how to build different neural networks (CNNs, RNNs, LSTMs, etc.), how to lead machine learning projects, and career advice for deep learning practitioners.
- Deep Learning Intuition
- Adversarial examples - GANs
- Full-cycle of a Deep Learning Project
- AI and Healthcare
- Deep Learning Strategy
- Interpretability of Neural Networks
- Career Advice and Reading Research Papers
- Deep Reinforcement Learning
🔗 Link to Course
🔗 Link to Materials
To learn some of the most widely used techniques in ML:
- Optimization and Calculus
- Overfitting and Underfitting
- Regularization
- Monte Carlo Estimation
- Maximum Likelihood Learning
- Nearest Neighbours
- ...
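As a quick illustration of the nearest-neighbours topic above, here is a minimal NumPy sketch of a 1-nearest-neighbour classifier (the tiny dataset and the helper name predict_1nn are made up for illustration):

```python
import numpy as np

def predict_1nn(X_train, y_train, x_query):
    """Return the label of the training point closest to x_query (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    return y_train[np.argmin(dists)]

# Tiny made-up dataset: two clusters with labels 0 and 1.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])

print(predict_1nn(X_train, y_train, np.array([0.1, 0.0])))  # expected: 0
print(predict_1nn(X_train, y_train, np.array([1.0, 0.9])))  # expected: 1
```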
The course serves as a basic introduction to machine learning and covers key concepts in regression, classification, optimization, regularization, clustering, and dimensionality reduction.
- Linear regression
- Logistic regression
- Regularization
- Boosting
- Neural networks
- PCA
- Clustering
- ...
Covers many fundamental ML concepts:
- Bayes rule
- From logic to probabilities
- Distributions
- Matrix Differential Calculus
- PCA
- K-means and EM
- Causality
- Gaussian Processes
- ...
The course covers the standard paradigms and algorithms in statistical machine learning.
- KNN
- Bayesian decision theory
- Convex optimization
- Linear and ridge regression
- Logistic regression
- SVM
- Random Forests
- Boosting
- PCA
- Clustering
- ...
This course covers topics such as how to:
- Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems
- Create random forests and regression models
- Deploy models
- Use PyTorch, the world’s fastest growing deep learning software, plus popular libraries like fastai and Hugging Face
- Foundations and Deep Dive into Diffusion Models
- ...
A seminar series on all sorts of topics related to building machine learning systems.
Specialization course on MLOps by Andrew Ng.
Covers the emerging science of Data-Centric AI (DCAI) that studies techniques to improve datasets, which is often the best way to improve performance in practical ML applications. Topics include:
- Data-Centric AI vs. Model-Centric AI
- Label Errors
- Dataset Creation and Curation
- Data-centric Evaluation of ML Models
- Class Imbalance, Outliers, and Distribution Shift
- ...
To learn some of the latest graph techniques in machine learning:
- PageRank
- Matrix Factorization
- Node Embeddings
- Graph Neural Networks
- Knowledge Graphs
- Deep Generative Models for Graphs
- ...
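The PageRank topic above, for example, fits in a short power-iteration sketch; the 4-node link matrix below is made up for illustration and is not course code:

```python
import numpy as np

# Column-stochastic link matrix for a made-up 4-page web graph:
# entry [i, j] is the probability of jumping from page j to page i.
M = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

d = 0.85                      # damping factor
n = M.shape[0]
rank = np.ones(n) / n         # start from a uniform distribution

for _ in range(100):          # power iteration
    rank = (1 - d) / n + d * M @ rank

print(rank)
```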
To learn the probabilistic paradigm of ML:
- Reasoning about uncertainty
- Continuous Variables
- Sampling
- Markov Chain Monte Carlo
- Gaussian Distributions
- Graphical Models
- Tuning Inference Algorithms
- ...
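To give a flavor of the sampling and MCMC topics above, here is a minimal random-walk Metropolis sketch targeting a standard normal distribution (an illustrative toy, not material from the course):

```python
import numpy as np

def log_target(x):
    # Unnormalized log-density of a standard normal.
    return -0.5 * x ** 2

rng = np.random.default_rng(0)
x = 0.0
samples = []

for _ in range(10_000):
    proposal = x + rng.normal(scale=1.0)          # random-walk proposal
    log_accept = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_accept:        # Metropolis acceptance rule
        x = proposal
    samples.append(x)

print("mean ~", np.mean(samples), "std ~", np.std(samples))
```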
This course introduces students to machine learning in healthcare, including the nature of clinical data and the use of machine learning for risk stratification, disease progression modeling, precision medicine, diagnosis, subtype discovery, and improving clinical workflows.
To learn some of the fundamentals of deep learning:
- Introduction to Deep Learning
The course starts off gradually with MLPs (multilayer perceptrons) and then progresses to concepts like attention and sequence-to-sequence models.
🔗 Link to Course
🔗 Lectures
🔗 Tutorials/Recitations
To learn some of the widely used techniques in deep learning:
- Machine Learning Basics
- Error Analysis
- Optimization
- Backpropagation
- Initialization
- Batch Normalization
- Style transfer
- Imitation Learning
- ...
To learn the latest and most widely used techniques in deep unsupervised learning:
- Autoregressive Models
- Flow Models
- Latent Variable Models
- Self-supervised learning
- Implicit Models
- Compression
- ...
To learn some of the advanced techniques in deep learning:
- Neural Nets: rotation and squashing
- Latent Variable Energy Based Models
- Unsupervised Learning
- Generative Adversarial Networks
- Autoencoders
- ...
To learn about foundation models like GPT-3, CLIP, Flamingo, Codex, and DINO.
This course introduces the practical and theoretical principles of deep neural networks.
- Computation graphs
- Activation functions and loss functions
- Training, regularization and data augmentation
- Basic and state-of-the-art deep neural network architectures including convolutional networks and graph neural networks
- Deep generative models such as auto-encoders, variational auto-encoders and generative adversarial networks
- ...
- The Basics of Scientific Simulators
- Introduction to Parallel Computing
- Continuous Dynamics
- Inverse Problems and Differentiable Programming
- Distributed Parallel Computing
- Physics-Informed Neural Networks and Neural Differential Equations
- Probabilistic Programming, AKA Bayesian Estimation on Programs
- Globalizing the Understanding of Models
This course consists of lectures focused on Transformers, providing a deep dive into the architecture and its applications:
- Introduction to Transformers
- Transformers in Language: GPT-3, Codex
- Applications in Vision
- Transformers in RL & Universal Compute Engines
- Scaling transformers
- Interpretability with transformers
- ...
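As a compact reference for the attention mechanism behind the Transformers covered above, here is a NumPy sketch of scaled dot-product attention (shapes and values are made up for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```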
Learn about different NLP concepts and how to apply language models and Transformers to NLP:
- What is Transfer Learning?
- BPE Tokenization
- Batching inputs
- Fine-tuning models
- Text embeddings and semantic search
- Model evaluation
- ...
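The tokenization and batching topics above can be previewed with a short snippet; this is a minimal sketch assuming the transformers library and PyTorch are installed and the bert-base-uncased checkpoint can be downloaded, not an excerpt from the course:

```python
from transformers import AutoTokenizer

# Load a pretrained tokenizer (downloads the vocabulary on first use).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

batch = tokenizer(
    ["I love open ML education.", "Tokenizers split text into subwords."],
    padding=True,         # pad to the longest sequence in the batch
    truncation=True,      # cut off sequences that are too long for the model
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape)
```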
To learn the latest approaches for deep learning based NLP:
- Dependency parsing
- Language models and RNNs
- Question Answering
- Transformers and pretraining
- Natural Language Generation
- T5 and Large Language Models
- Future of NLP
- ...
To learn the latest neural network based techniques for NLP:
- Language Modeling
- Efficiency tricks
- Conditioned Generation
- Structured Prediction
- Model Interpretation
- Advanced Search Algorithms
- ...
To learn the latest concepts in natural language understanding:
- Grounded Language Understanding
- Relation Extraction
- Natural Language Inference (NLI)
- NLU and Neural Information Extraction
- Adversarial testing
- ...
To learn:
- Basics of modern NLP techniques
- Multi-task, Multi-domain, multi-lingual learning
- Prompting + Sequence-to-sequence pre-training
- Interpreting and Debugging NLP Models
- Learning from Knowledge-bases
- Adversarial learning
- ...
To learn the latest concepts for doing multilingual NLP:
- Typology
- Words, Part of Speech, and Morphology
- Advanced Text Classification
- Machine Translation
- Data Augmentation for MT
- Low Resource ASR
- Active Learning
- ...
To learn advanced concepts in NLP:
- Attention Mechanisms
- Transformers
- BERT
- Question Answering
- Model Distillation
- Vision + Language
- Ethics in NLP
- Commonsense Reasoning
- ...
Stanford's famous CS231n course. The videos are only available for the Spring 2017 semester. The course is currently named Deep Learning for Computer Vision, but the Spring 2017 version is titled Convolutional Neural Networks for Visual Recognition.
- Image Classification
- Loss Functions and Optimization
- Introduction to Neural Networks
- Convolutional Neural Networks
- Training Neural Networks
- Deep Learning Software
- CNN Architectures
- Recurrent Neural Networks
- Detection and Segmentation
- Visualizing and Understanding
- Generative Models
- Deep Reinforcement Learning
🔗 Link to Course
🔗 Link to Materials
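To make the CNN topics listed above concrete, here is a minimal PyTorch sketch of a tiny convolutional classifier for 32x32 RGB images (an illustrative toy, not the course's assignment code):

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier for 32x32 RGB inputs (e.g. CIFAR-10-sized images).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # -> (16, 32, 32)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> (16, 16, 16)
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> (32, 16, 16)
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> (32, 8, 8)
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # 10 class scores
)

x = torch.randn(4, 3, 32, 32)   # a fake batch of 4 images
print(model(x).shape)           # torch.Size([4, 10])
```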
To learn some of the fundamental concepts in CV:
- Introduction to deep learning for CV
- Image Classification
- Convolutional Networks
- Attention Networks
- Detection and Segmentation
- Generative Models
- ...
To learn modern methods for computer vision:
- CNNs
- Advanced PyTorch
- Understanding Neural Networks
- RNN, Attention and ViTs
- Generative Models
- GPU Fundamentals
- Self-Supervision
- Neural Rendering
- Efficient Architectures
To learn about concepts in geometric deep learning:
- Learning in High Dimensions
- Geometric Priors
- Grids
- Manifolds and Meshes
- Sequences and Time Warping
- ...
To learn the latest concepts in deep RL:
- Intro to RL
- RL algorithms
- Real-world sequential decision making
- Supervised learning of behaviors
- Deep imitation learning
- Cost functions and reward functions
- ...
The Reinforcement Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence.
- Introduction to RL
- Dynamic Programming
- Model-free algorithms
- Deep reinforcement learning
- ...
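As a tiny illustration of the dynamic programming lecture above, here is a value-iteration sketch on a made-up two-state MDP (purely illustrative, not course code):

```python
import numpy as np

# A made-up MDP with 2 states and 2 actions.
# P[(s, a)] is a list of (probability, next_state, reward) transitions.
P = {
    (0, 0): [(1.0, 0, 0.0)],
    (0, 1): [(0.8, 1, 1.0), (0.2, 0, 0.0)],
    (1, 0): [(1.0, 0, 0.0)],
    (1, 1): [(1.0, 1, 2.0)],
}
gamma = 0.9
V = np.zeros(2)

# Value iteration: V(s) <- max_a sum_s' p * (r + gamma * V(s'))
for _ in range(100):
    V = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)]) for a in (0, 1))
        for s in (0, 1)
    ])

print(V)
```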
To learn full-stack production deep learning:
- ML Projects
- Infrastructure and Tooling
- Experiment Managing
- Troubleshooting DNNs
- Data Management
- Data Labeling
- Monitoring ML Models
- Web deployment
- ...
Covers the fundamental concepts of deep learning:
- Single-layer neural networks and gradient descent
- Multi-layer neural networks and backpropagation
- Convolutional neural networks for images
- Recurrent neural networks for text
- Autoencoders, variational autoencoders, and generative adversarial networks
- Encoder-decoder recurrent neural networks and transformers
- PyTorch code examples
🔗 Link to Course
🔗 Link to Materials
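Matching the first topic in the list above, here is a minimal PyTorch training loop for a single-layer network fit with gradient descent on made-up data (an illustrative sketch, not code from the course):

```python
import torch
import torch.nn as nn

# Made-up regression data: y = 2x - 1 plus noise.
torch.manual_seed(0)
X = torch.rand(100, 1)
y = 2 * X - 1 + 0.05 * torch.randn(100, 1)

model = nn.Linear(1, 1)                    # a single-layer network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                        # backpropagation
    optimizer.step()                       # gradient descent update

print(model.weight.item(), model.bias.item())  # should approach 2 and -1
```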
Covers the most dominant paradigms of self-driving cars: modular pipeline-based approaches as well as deep-learning based end-to-end driving techniques.
- Camera, lidar and radar-based perception
- Localization, navigation, path planning
- Vehicle modeling/control
- Deep Learning
- Imitation learning
- Reinforcement learning
Designing autonomous decision-making systems is one of the longstanding goals of artificial intelligence. Such systems, if realized, can have a big impact on machine learning for robotics, game playing, control, and health care, to name a few areas. This course introduces reinforcement learning as a general framework for designing such autonomous decision-making systems.
- Introduction to RL
- Multi-armed bandits
- Policy Gradient Methods
- Contextual Bandits
- Finite Markov Decision Process
- Dynamic Programming
- Policy Iteration, Value Iteration
- Monte Carlo Methods
- ...
🔗 Link to Course
🔗 Link to Materials
A mini 6-lecture series by Pieter Abbeel.
- MDPs, Exact Solution Methods, Max-ent RL
- Deep Q-Learning
- Policy Gradients and Advantage Estimation
- TRPO and PPO
- DDPG and SAC
- Model-based RL
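Deep Q-learning builds on the tabular Q-learning update; here is a minimal sketch of that update on a made-up 5-state chain environment (a toy illustration, not from the lectures):

```python
import numpy as np

# Toy chain: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95
rng = np.random.default_rng(0)

for _ in range(500):
    s = 0
    while s != 4:
        a = rng.integers(n_actions)        # behavior policy: uniform random (Q-learning is off-policy)
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # greedy policy: should pick "right" (1) in states 0-3
```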
Covers topics from basic concepts of Reinforcement Learning to more advanced ones:
- Markov decision processes & planning
- Model-free policy evaluation
- Model-free control
- Reinforcement learning with function approximation & Deep RL
- Policy Search
- Exploration
- ...
🔗 Link to Course
🔗 Link to Materials
This is a graduate-level course covering different aspects of deep multi-task and meta learning.
- Multi-task learning, transfer learning basics
- Meta-learning algorithms
- Advanced meta-learning topics
- Multi-task RL, goal-conditioned RL
- Meta-reinforcement learning
- Hierarchical RL
- Lifelong learning
- Open problems
🔗 Link to Course
🔗 Link to Materials
A course introducing the foundations of ML for applications in genomics and the life sciences more broadly.
- Interpreting ML Models
- DNA Accessibility, Promoters and Enhancers
- Chromatin and gene regulation
- Gene Expression, Splicing
- RNA-seq, Splicing
- Single cell RNA-sequencing
- Dimensionality Reduction, Genetics, and Variation
- Drug Discovery
- Protein Structure Prediction
- Protein Folding
- Imaging and Cancer
- Neuroscience
This course, from Pieter Abbeel, reviews reinforcement learning and continues with applications in robotics.
- MDPs: Exact Methods
- Discretization of Continuous State Space MDPs
- Function Approximation / Feature-based Representations
- LQR, iterative LQR / Differential Dynamic Programming
- ...
🔗 Link to Course
🔗 Link to Materials
Reach out on Twitter if you have any questions.
If you are interested in contributing, feel free to open a PR with a link to the course. It will take a bit of time, but I have plans to do many things with these individual lectures. We can summarize the lectures, include notes, provide additional reading material, indicate the difficulty of the content, etc.
You can now find ML Course notes here.