
Lecture materials and notebooks for the Pattern Analysis and Machine Intelligence machine learning praktikum in the winter semester 2019/2020 at Goethe University


MLPR Winter Semester 19/20


The course material is organized into per-week subdirectories. Each notebook has a link at the top to open it in Google Colab, so no local installation is needed: simply click on a notebook and follow the link to run it in your browser.

The structure of the notebooks is inspired by popular online courses (such as Andrew Ng's fantastic Coursera classes): they contain blank lines and function stubs that need to be filled in for the code to work.
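For illustration, an exercise cell in a notebook might look like the sketch below (a made-up example, not taken from the actual notebooks; there, the function body would be left blank for you to complete):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid activation.

    In the notebooks, a body like this is left as a TODO for you
    to fill in; one possible solution is shown here.
    """
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))  # 0.5
```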

If you were unable to attend the lecture in person, the recommended order for working through each week's material is the following:

  • Read the slides
  • Go through any additional references or linked material
  • Complete the notebook

Solution notebooks will be uploaded with a time delay. It is highly recommended that you attempt to complete the notebook yourself before taking a look at the solutions.

Schedule

After a general introduction, the course is divided into three main blocks: supervised, unsupervised and reinforcement learning. Below, each date lists the topics that will be introduced, followed by the practical exercise after the arrow:

Introduction

  • 14.10: Introduction, Python tools review, software management (version control & documentation)
  • 21.10: General ideas behind machine learning, gradient descent on functions, logistic regression -> Kaggle Titanic dataset
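As a small taste of the 21.10 session, gradient descent on a simple one-dimensional function can be sketched in a few lines (an illustrative sketch only; the notebook develops this step by step):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```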

Supervised learning

  • 28.10: Random forests from scratch -> Revisiting Titanic and San Francisco Crime Challenge
  • 04.11: Naive Bayes -> Spam message identification
  • 11.11: Basic neural networks from scratch -> Multi-layer perceptron for classification of fashion images
  • 18.11: Introduction to PyTorch for deep learning, convolutional neural networks -> Reading traditional Japanese characters (Kuzushiji)
  • 25.11: Neural sequence models, recurrent neural networks -> Shakespeare poetry text generation
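To give a flavor of the supervised block, the Naive Bayes idea from the 04.11 session can be sketched on a handful of made-up messages (a toy sketch, not the course implementation; all example data is invented):

```python
from collections import Counter
import math

def train_naive_bayes(docs, labels):
    """Estimate log-priors and Laplace-smoothed word counts per class."""
    classes = set(labels)
    priors, word_counts, totals = {}, {}, {}
    vocab = {w for d in docs for w in d.split()}
    for c in classes:
        c_docs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = math.log(len(c_docs) / len(docs))
        word_counts[c] = Counter(w for d in c_docs for w in d.split())
        totals[c] = sum(word_counts[c].values())
    return priors, word_counts, totals, vocab

def predict(doc, priors, word_counts, totals, vocab):
    """Pick the class with the highest posterior log-probability."""
    best_c, best_score = None, float("-inf")
    for c in priors:
        score = priors[c]
        for w in doc.split():
            # Laplace smoothing so unseen words don't zero out the product.
            score += math.log((word_counts[c][w] + 1) / (totals[c] + len(vocab)))
        if score > best_score:
            best_c, best_score = c, score
    return best_c

docs = ["win cash now", "cheap cash offer", "meeting at noon", "lunch at noon"]
labels = ["spam", "spam", "ham", "ham"]
model = train_naive_bayes(docs, labels)
print(predict("cash offer now", *model))  # spam
```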

Unsupervised learning

  • 02.12: Unsupervised learning: k-means clustering and principal component analysis -> Known self-generated distributions
  • 09.12: Unsupervised neural networks, autoencoders (unsupervised feature pre-training) -> Revisiting fashion and Kuzushiji
  • 16.12: Generative models 1: variational autoencoders -> Handwritten digit generation
  • 13.01: Generative models 2: generative adversarial networks -> Face generation
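The k-means exercise of the 02.12 session, clustering known self-generated distributions, boils down to Lloyd's algorithm. A minimal sketch on two hand-made blobs (illustrative only; the initialization here simply takes the first k points, which the notebook may handle differently):

```python
import numpy as np

def kmeans(points, k, steps=10):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    centroids = points[:k].copy()  # naive init: first k points
    for _ in range(steps):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return centroids, labels

# Two well-separated self-generated blobs.
points = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                   [5.0, 5.1], [5.2, 5.0], [5.1, 5.2]])
centroids, labels = kmeans(points, k=2)
print(np.round(centroids, 1))  # [[0.1 0.1] [5.1 5.1]]
```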

Reinforcement learning

  • 20.01: Classic q-learning -> Cart pole
  • 27.01: Deep reinforcement learning, QNN -> Taxi driver
  • 03.02: REINFORCE algorithm -> Robotic application (walking/grasping)
  • 10.02: State-of-the-art, open questions and existing issues -> Project pitches
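The classic Q-learning of the 20.01 session can be illustrated on an environment even simpler than cart pole: a toy chain where the agent must walk right to reach a goal (a hypothetical example environment, not one used in the course):

```python
import random

def q_learning(n_states=5, episodes=1000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy chain: start at state 0, actions
    0 = left, 1 = right; reaching the last state yields reward 1."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max((0, 1), key=lambda a: q[s][a])
            s_next = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Bellman update toward the bootstrapped target.
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = q_learning()
greedy = [max((0, 1), key=lambda a: q[s][a]) for s in range(4)]
print(greedy)  # should be [1, 1, 1, 1]: always move right toward the goal
```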