Neuromatch Academy Deep Learning (NMA-DL) syllabus
The content should primarily be accessed from our new ebook: https://deeplearning.neuromatch.io/ [under continuous development]
August 2-20, 2021
Objectives: Gain hands-on, code-first experience with deep learning theories, models, and skills that are useful for applications and for advancing science. We focus on how to decide which problems can be tackled with deep learning, how to determine which model is best, how to implement a model effectively, how to visualize and justify findings, and how neuroscience can inspire deep learning. Throughout, we emphasize the ethical use of DL.
Please check out expected prerequisites here!
Confirmed speakers:
- Amita Kapoor (U Delhi)
- Anima Anandkumar (Caltech)
- Aude Oliva (MIT)
- Chelsea Finn (Stanford)
- Emily Denton (Google)
- Geoffrey Hinton (U Toronto)
- Joao Sedoc (NYU)
- Kyunghyun Cho (NYU)
- Melanie Mitchell (Santa Fe Institute)
- Yann LeCun (Facebook)
- Yoshua Bengio (MILA)
Course materials
Coming soon... stay tuned...
Course outline
Week 1: The basics
Mon, August 2, 2021: Intro to DL academy
coordinated by Konrad Kording (U Penn)
Description Welcome, introduction to Google Colab, meet and greet, a bit of DL history, DL basics, and an introduction to PyTorch
Tue, August 3, 2021: Linear DL
coordinated by Andrew Saxe (Oxford)
Description Gradients, autograd, linear regression, the concept of optimization, loss functions, and designing deep linear systems and how to train them
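As a taste of what this day covers, here is a minimal, framework-free sketch of fitting a line by gradient descent on a mean-squared-error loss. It is illustrative only, not course material: the data, learning rate, and step count are arbitrary choices, and the tutorials use PyTorch's autograd rather than hand-derived gradients.

```python
# Fit y = w*x + b by gradient descent on MSE loss, in plain Python.
# Gradients are derived by hand here; PyTorch's autograd computes them for you.

def fit_linear(xs, ys, lr=0.05, steps=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of L = (1/n) * sum((w*x + b - y)^2)
        dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * dw   # step downhill on the loss surface
        b -= lr * db
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]        # generated by y = 2x + 1
w, b = fit_linear(xs, ys)
print(w, b)                      # close to 2.0 and 1.0
```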
Wed, August 4, 2021: Multi-layer Perceptrons (MLPs)
coordinated by Surya Ganguli (Stanford)
Description From neuroscience inspiration, to solving the XOR problem, to function approximation, cross-validation, training, and trade-offs
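The XOR problem mentioned above is the classic example of why hidden layers matter: no single linear unit can compute XOR, but a two-layer network can. This hedged sketch uses hand-picked weights (trained networks would learn them) and a step nonlinearity purely for illustration.

```python
# XOR via a tiny two-layer network with hand-chosen weights:
# h1 = OR(x1, x2), h2 = AND(x1, x2), output = h1 AND NOT h2.

def step(z):
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is 1
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are 1
    return step(h1 - h2 - 0.5)  # h1 but not h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # prints the XOR truth table
```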
Thu, August 5, 2021: Optimization
coordinated by Ioannis Mitliagkas (MILA)
Description Why optimization is hard and all the tricks to get it to work
Fri, August 6, 2021: Regularization
coordinated by Lyle Ungar (U Penn)
Description The problem of overfitting and different ways to solve it
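One of the ways to fight overfitting covered this day is an L2 (weight decay) penalty. As a toy illustration (not course code), a 1-D least-squares problem with penalty `lam * w**2` has the closed-form minimiser `w = sum(x*y) / (sum(x*x) + lam)`, which shrinks the fitted weight toward zero as the penalty grows; all numbers below are made up.

```python
# L2-regularized 1-D least squares: minimise sum((w*x - y)^2) + lam * w^2.
# The closed form shows how the penalty shrinks the weight toward zero.

def ridge_1d(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # exactly y = 2x
print(ridge_1d(xs, ys, 0.0))   # 2.0: unregularized fit recovers the slope
print(ridge_1d(xs, ys, 14.0))  # 1.0: a strong penalty halves the weight
```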
Week 2: Doing more with fewer parameters
Mon, August 9, 2021: Parameter sharing: Convnets and RNNs
coordinated by Alona Fyshe (U Alberta)
Description How the number of parameters affects generalization, and how Convolutional Neural Networks (Convnets) and Recurrent Neural Networks (RNNs) can help
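The parameter-sharing idea behind Convnets can be sketched in a few lines: a 1-D convolution slides the same small kernel across the whole input, so three weights cover any input length, whereas a fully connected layer would need a separate weight per input position. The kernel and signal below are arbitrary illustrative values.

```python
# 1-D convolution (valid padding): the same 3-weight kernel is reused
# at every position — that reuse is "parameter sharing".

def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge_detector = [-1.0, 0.0, 1.0]        # toy kernel: responds to rises
signal = [0.0, 0.0, 1.0, 1.0, 1.0]
print(conv1d(signal, edge_detector))    # large response where the signal jumps
```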
Tue, August 10, 2021: Modern Convnets
coordinated by Alexander Ecker (U Goettingen)
Description Modern Convolutional Neural Nets and how to use them for Transfer Learning
Wed, August 11, 2021: Modern RNNs
coordinated by James Evans (DeepAI)
Description Memory, time series, recurrence, vanishing gradients and embeddings
Thu, August 12, 2021: Attention and Transformers
coordinated by He He (NYU)
Description How attention helps classification, encoding and decoding
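The core computation of this day, scaled dot-product attention, fits in a short sketch: each query is compared against all keys, the similarities are softmaxed into weights, and the output is the weight-averaged values. All vectors here are made up for illustration; real implementations batch this with matrix operations.

```python
# Scaled dot-product attention for a single query over toy 2-D keys/values.
import math

def attention(query, keys, values):
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # softmax over keys
    # Output: attention-weighted average of the values
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([1.0, 0.0], keys, values)  # query matches the first key
print(out)                                 # weighted toward the first value
```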
Fri, August 13, 2021: Generative Models (VAEs & GANs)
coordinated by Vikash Gilja (UCSD) and Akash Srivastava (MIT-IBM)
Description Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs) as methods for representing latent data statistics
Week 3: Advanced methods
Mon, August 16, 2021: Projects day
coordinated by Project TAs
Tue, August 17, 2021: Unsupervised and Self-supervised Learning
coordinated by Blake Richards (McGill) and Tim Lillicrap (Google DeepMind)
Description Learning without direct supervision
Wed, August 18, 2021: Basic Reinforcement Learning (RL) ideas
coordinated by Jane Wang (Google DeepMind) and Feryal Behbahani (Google DeepMind)
Description How RL can help solve DL problems
Thu, August 19, 2021: RL for games
coordinated by Tim Lillicrap (Google DeepMind) and Blake Richards (McGill)
Description Learn how RL solved the game of Go
Fri, August 20, 2021: Continual Learning / Causality / Future stuff & Finishing Proposals and Wrap-up
coordinated by Joshua T. Vogelstein (Johns Hopkins) and Vincenzo Lomonaco (U Pisa)
Description How can we infer causality, how can we generalize out of sample, and what will the future bring?
Description After the tutorials the day is dedicated to group projects and celebrating course completion
Licensing
The contents of this repository are shared under a Creative Commons Attribution 4.0 International License.
Software elements are additionally licensed under the BSD (3-Clause) License.
Derivative works may use the license that is more appropriate to the relevant context.