This course covers basic concepts in high-dimensional machine learning and the importance of regularization. We study in detail high-dimensional linear models regularized by the Euclidean norm, including ridge regression, ridge logistic regression, and support vector machines. We then show how positive definite kernels allow these linear models to be turned into rich nonlinear models, usable even for non-vectorial data such as strings and graphs, and convenient for integrating heterogeneous data.
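As a taste of the material, here is a minimal sketch of kernel ridge regression, which combines the two ideas above: a linear model regularized by the Euclidean norm, made nonlinear through a positive definite kernel. The Gaussian (RBF) kernel, the function names, and the parameter values are illustrative choices, not taken from the course itself.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X, y, lam=1e-3, gamma=0.5):
    """Dual coefficients alpha solving (K + lam*n*I) alpha = y."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_new, gamma=0.5):
    """Prediction f(x) = sum_i alpha_i k(x_i, x) at the new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy 1-D regression problem: noisy sine, a clearly nonlinear target.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

alpha = fit_kernel_ridge(X, y)
y_hat = predict(X, alpha, X)
```

Note that the solver only ever touches the kernel matrix `K`, never explicit feature vectors; this is what lets the same code work for strings or graphs once a suitable kernel is supplied.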
Slides for the course can be found here: Slides
For practical sessions, a working Jupyter notebook setup is required. Course material will be in Python.
See the challenge's website
For practice exercises and quizzes, please check out the material from past editions of the course: 2019, 2020
- Jean-Philippe Vert (Prof.)
- Julien Mairal (Prof.)
- Michael Arbel (Prof.)
- Romain Menegaux (T.A.)