Code and notebooks for the deep learning course dataflowr. Here is the schedule followed at École Polytechnique in 2023:
- Module 1 - Introduction & General Overview: slides + notebook (Dogs and Cats with VGG) + practicals (more dogs and cats)
Things to remember
- you do not need to understand everything to run a deep learning model! But the main goal of this course will be to come back to each step done today and understand it...
- to use the DataLoader from PyTorch, you need to follow its API (e.g. for classification, store your dataset in one folder per class)
- using a pretrained model and modifying it to adapt it to a similar task is easy.
- if you do not understand why we take this loss, that's fine, we'll cover that in Module 3.
- even with a GPU, avoid unnecessary computations!
- Module 2a - PyTorch tensors
- Module 2b - Automatic differentiation + Practicals
- Module 3 - Loss function for classification
- MLP from scratch (start of HW1)
- another look at autodiff with Julia
Things to remember
- PyTorch tensors = NumPy on GPU + gradients!
- Automatic differentiation is not only the chain rule! The backpropagation algorithm and dual numbers are clever ways to implement automatic differentiation...
- Loss vs Accuracy. Know your loss for a classification task!
- Recap on Losses for classification and Optimization
- overfitting a MLP on CIFAR10: Stacking_layers_MLP_CIFAR10.ipynb
- Module 6: Convolutional neural network
- how to regularize with dropout and uncertainty estimation with MC Dropout: Module 15 - Dropout
Things to remember
- know your loss for a classification task!
- know your optimizer (Module 4 done at home)
- know how to build a neural net with torch.nn.Module (Module 5 done at home)
- know how to use convolution and pooling layers (kernel, stride, padding)
- know how to use dropout
TBC
-
Module 1: Introduction & General Overview
- Intro: finetuning VGG for dogs vs cats 01_intro.ipynb
- Practical: Using CNN for more dogs and cats 01_practical_empty.ipynb and its solution 01_practical_sol.ipynb
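A minimal sketch, not the notebooks' exact code, of the two ingredients at play here: a dataset stored in one folder per class and read with `ImageFolder`/`DataLoader`, and a pretrained VGG whose classifier head is swapped for the two-class task (the `data/train` path is a placeholder):

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Folder layout assumed by ImageFolder: data/train/cat/*.jpg, data/train/dog/*.jpg
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained VGG16: freeze the convolutional features, replace the last classifier layer
model = models.vgg16(weights="IMAGENET1K_V1")   # older torchvision: models.vgg16(pretrained=True)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)        # 2 classes: dogs vs cats
```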
-
Module 2: PyTorch tensors and automatic differentiation
- Basics on PyTorch tensors and automatic differentiation 02a_basics.ipynb
- Linear regression from NumPy to PyTorch 02b_linear_reg.ipynb
- Practical: implementing backprop from scratch 02_backprop.ipynb and its solution 02_backprop_sol.ipynb
- Bonus: intro to JAX: autodiff the functional way autodiff_functional_empty.ipynb and its solution autodiff_functional_sol.ipynb
- Bonus: Linear regression in JAX linear_regression_jax.ipynb
- Bonus: automatic differentiation with dual numbers AD_with_dual_numbers_Julia.ipynb
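A minimal PyTorch sketch of the core ideas of this module (the JAX and Julia bonuses revisit the same ideas in other frameworks): tensors that live on the GPU and track gradients, a backward pass that fills `.grad`, and a manual gradient step.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(100, 3, device=device)                    # data, no gradient needed
w = torch.randn(3, 1, device=device, requires_grad=True)  # parameters to learn
b = torch.zeros(1, device=device, requires_grad=True)

loss = ((x @ w + b) ** 2).mean()   # forward pass ending in a scalar
loss.backward()                    # backprop: fills w.grad and b.grad

with torch.no_grad():              # manual SGD step, outside the autograd graph
    w -= 0.1 * w.grad
    b -= 0.1 * b.grad
    w.grad.zero_()
    b.grad.zero_()
```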
-
Homework 1: MLP from scratch
- hw1_mlp.ipynb and its solution hw1_mlp_sol.ipynb
-
Module 3: Loss functions for classification
- An explanation of underfitting and overfitting with polynomial regression 03_polynomial_regression.ipynb
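On the module's main point ("know your loss for a classification task"), a quick reminder in code: PyTorch's `cross_entropy` takes raw logits and applies log-softmax internally, and the accuracy you report is not the quantity you optimize.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)            # raw scores for 8 samples and 10 classes (no softmax)
targets = torch.randint(0, 10, (8,))   # integer class labels

loss = F.cross_entropy(logits, targets)                       # log-softmax + negative log-likelihood
accuracy = (logits.argmax(dim=1) == targets).float().mean()   # what you report, not what you optimize
```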
-
Module 4: Optimization for deep learning
- Practical: code Adagrad, RMSProp, Adam, AMSGrad 04_gradient_descent_optimization_algorithms_empty.ipynb and its solution 04_gradient_descent_optimization_algorithms_sol.ipynb
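The practical implements these optimizers from scratch; for reference, a minimal sketch of the standard PyTorch training step with a built-in optimizer (Adam here, on a toy regression):

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)

for _ in range(100):
    optimizer.zero_grad()                             # clear the gradients of the previous step
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()                                   # compute gradients
    optimizer.step()                                  # apply the Adam update to all parameters
```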
-
- Practical: overfitting a MLP on CIFAR10 Stacking_layers_MLP_CIFAR10.ipynb and its solution MLP_CIFAR10.ipynb
-
Module 6: Convolutional neural network
- Practical: build a simple digit recognizer with CNN 06_convolution_digit_recognizer.ipynb
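A minimal sketch, not the notebook's exact architecture, of a CNN digit recognizer, with the kernel/stride/padding bookkeeping spelled out in the comments:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),  # 28x28 -> 28x28 (padding=1 keeps the size)
    nn.ReLU(),
    nn.MaxPool2d(2),                                        # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),            # 14x14 -> 14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                                        # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                              # 10 digit classes
)
logits = net(torch.randn(8, 1, 28, 28))                     # shape (8, 10)
```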
-
Module 8: Embedding layers, Collaborative filtering and Word2vec
- Practical: Collaborative filtering with Movielens 100k dataset 08_collaborative_filtering_empty.ipynb
- Practical: Refactoring code, collaborative filtering with Movielens 1M dataset 08_collaborative_filtering_1M.ipynb
- Practical: Word Embedding (word2vec) in PyTorch 08_Word2vec_pytorch_empty.ipynb
- Finding Synonyms and Analogies with Glove 08_Playing_with_word_embedding.ipynb
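These notebooks all revolve around `nn.Embedding`. A minimal sketch of the dot-product model used for collaborative filtering (the practicals build and train richer variants of this; the user/item counts below are the MovieLens 100k sizes):

```python
import torch
import torch.nn as nn

class DotModel(nn.Module):
    """Predicted rating = dot product of a user embedding and an item embedding."""
    def __init__(self, n_users, n_items, dim=50):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, user_ids, item_ids):
        return (self.user_emb(user_ids) * self.item_emb(item_ids)).sum(dim=1)

model = DotModel(n_users=943, n_items=1682)                 # MovieLens 100k: 943 users, 1682 movies
pred = model(torch.tensor([0, 1]), torch.tensor([10, 20]))  # ratings for two (user, item) pairs
```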
-
- Practical: denoising autoencoder (with convolutions and transposed convolutions) 09_AE_NoisyAE.ipynb
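A minimal sketch of the ingredients of a denoising autoencoder on 28x28 grayscale images: an encoder built from strided convolutions, a decoder built from transposed convolutions, and a reconstruction loss computed against the clean input.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 14x14 -> 7x7
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),    # 7x7 -> 14x14
    nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),  # 14x14 -> 28x28
)

x = torch.rand(8, 1, 28, 28)
noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)         # corrupt the input...
loss = nn.functional.mse_loss(decoder(encoder(noisy)), x)   # ...and reconstruct the clean image
```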
-
- UNet for image segmentation UNet_image_seg.ipynb
-
- implementing Real NVP Normalizing_flows_empty.ipynb and its solution Normalizing_flows_sol.ipynb
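A minimal sketch of a Real NVP-style affine coupling layer, the invertible building block implemented in this practical (simplified: a single split, an even feature dimension, and scale/shift produced by a small MLP):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Keep x1 unchanged, transform x2 with a scale and shift computed from x1 (easy to invert)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep the scale well behaved
        y2 = x2 * s.exp() + t
        log_det = s.sum(dim=1)                 # log |det Jacobian| of the transformation
        return torch.cat([x1, y2], dim=1), log_det

y, log_det = AffineCoupling(4)(torch.randn(16, 4))
```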
-
Module 10 - Generative Adversarial Networks
- Conditional GAN and InfoGAN 10_GAN_double_moon.ipynb
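A minimal sketch of the vanilla GAN update that conditional GAN and InfoGAN build on, here on toy 2D points (the `real` batch is a placeholder standing in for samples from the double-moon dataset):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> fake 2D point
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # 2D point -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2)          # placeholder for a batch of real double-moon samples
fake = G(torch.randn(64, 2))

# Discriminator step: push real towards 1, fakes towards 0 (detach so G is not updated here)
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
d_loss.backward()
opt_d.step()

# Generator step: fool the discriminator (fakes towards 1)
opt_g.zero_grad()
g_loss = bce(D(fake), torch.ones(64, 1))
g_loss.backward()
opt_g.step()
```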
-
Module 11 - Recurrent Neural Networks and Batches with sequences in PyTorch
- notebook used in the theory course: 11_RNN.ipynb
- predicting engine failure with an RNN 11_predicitions_RNN_empty.ipynb
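On the "batches with sequences" part: variable-length sequences are padded to a common length and wrapped in a `PackedSequence` so the RNN does not waste computation on (or learn from) the padding. A minimal sketch:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(3, 5, 8)        # (batch, max_len, features), zero-padded
lengths = torch.tensor([5, 3, 2])    # true length of each sequence

rnn = torch.nn.GRU(input_size=8, hidden_size=16, batch_first=True)
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
packed_out, h = rnn(packed)                                           # padded steps are skipped
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)  # back to a padded tensor
```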
-
Module 12 - Attention and Transformers
- Correcting the PyTorch tutorial on attention in seq2seq: 12_seq2seq_attention.ipynb and its solution
- building a simple transformer block and thinking like transformers: GPT_hist.ipynb and its solution
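A minimal sketch of the scaled dot-product attention at the core of both notebooks (single head, no learned projections):

```python
import math
import torch

def attention(q, k, v, mask=None):
    """softmax(Q K^T / sqrt(d)) V, with an optional mask on the attention scores."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 10, 64)   # (batch, seq_len, d): self-attention when q = k = v
out = attention(q, k, v)             # shape (2, 10, 64)
```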
-
Module 13 - Siamese Networks and Representation Learning
- learning embeddings with contrastive loss: 13_siamese_triplet_mnist_empty.ipynb
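A minimal sketch of the triplet flavour of this idea: one shared encoder embeds an anchor, a positive (same class) and a negative (different class), and the loss pulls the first pair together while pushing the second apart.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))   # one encoder shared by all branches

anchor   = embed(torch.randn(32, 1, 28, 28))   # e.g. images of some digit
positive = embed(torch.randn(32, 1, 28, 28))   # same class as the anchor
negative = embed(torch.randn(32, 1, 28, 28))   # different class

loss = nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)
```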
-
Module 15 - Dropout
- Dropout on a toy dataset: 15a_dropout_intro.ipynb
- playing with dropout on MNIST: 15b_dropout_mnist.ipynb
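MC Dropout (the uncertainty-estimation trick mentioned in the schedule above) in a nutshell: keep dropout active at prediction time and average several stochastic forward passes; the spread of the predictions gives a rough uncertainty estimate. A minimal sketch:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 100), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(100, 1))
x = torch.randn(4, 10)

model.train()                                        # keeps dropout active at prediction time
samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean(dim=0), samples.std(dim=0)  # prediction and rough uncertainty per input
```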
-
- impact of batchnorm: 16_batchnorm_simple.ipynb
- Playing with batchnorm without any training: 16_simple_batchnorm_eval.ipynb
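A minimal illustration of the train/eval distinction these notebooks play with: in train mode BatchNorm normalizes with the statistics of the current batch (and updates its running estimates), in eval mode it uses the running estimates instead.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)
x = torch.randn(8, 3) * 5 + 2   # features far from zero mean / unit variance

bn.train()
y_train = bn(x)                 # batch statistics: output is (roughly) standardized
bn.eval()
y_eval = bn(x)                  # running statistics: output depends on what was seen in train mode
print(y_train.mean(dim=0), y_eval.mean(dim=0))
```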
-
Module 18a - Denoising Diffusion Probabilistic Models
- Denoising Diffusion Probabilistic Models for MNIST: ddpm_nano_empty.ipynb and its solution ddpm_nano_sol.ipynb
- Denoising Diffusion Probabilistic Models for CIFAR10: ddpm_micro_sol.ipynb
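A minimal sketch of the DDPM forward (noising) process used in both notebooks, x_t = sqrt(ᾱ_t) x_0 + sqrt(1 − ᾱ_t) ε; the linear β schedule below is the common default and an assumption here, not necessarily the notebooks' exact choice:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def q_sample(x0, t):
    """Sample x_t from a clean image x0 and integer timesteps t (one per batch element)."""
    eps = torch.randn_like(x0)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1 - a).sqrt() * eps, eps   # the network learns to predict eps from x_t

xt, eps = q_sample(torch.randn(8, 1, 28, 28), torch.randint(0, T, (8,)))
```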
-
Module - Deep Learning on graphs
- Inductive bias in GCN: a spectral perspective GCN_inductivebias_spectral.ipynb and for colab GCN_inductivebias_spectral-colab.ipynb
- Graph ConvNets in PyTorch spectral_gnn.ipynb
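A minimal sketch of one graph convolution in dense form, H' = ReLU(Â H W) with Â the symmetrically normalized adjacency (self-loops added); the notebooks work with real graphs and go well beyond this single layer:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, A_hat, H):
        return torch.relu(A_hat @ self.linear(H))

n, d = 5, 16
A = torch.randint(0, 2, (n, n)).float()
A = ((A + A.T + torch.eye(n)) > 0).float()                        # symmetrize and add self-loops
deg = A.sum(dim=1)
A_hat = A / (deg.sqrt().unsqueeze(1) * deg.sqrt().unsqueeze(0))   # D^{-1/2} A D^{-1/2}
out = GCNLayer(d, 8)(A_hat, torch.randn(n, d))
```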
-
NeRF
- PyTorch Tiny NeRF tiny_nerf_extended.ipynb
If you want to run the notebooks locally, follow the instructions in Module 0 - Running the notebooks locally.
Archives are available on the archive-2020 branch.