
2019-20 unit material for Information Processing and the Brain


Information Processing and the Brain 2019/2020

Here you can find the relevant content for Information Processing and the Brain 2019/2020. This unit covers several aspects of information processing in the brain, such as sensory processing, probabilistic codes, deep learning, recurrent neural networks, credit assignment, reinforcement learning and model-based inference.

It is jointly taught by Conor Houghton and Rui Ponte Costa at the Department of Computer Science [School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics], Faculty of Engineering, University of Bristol.

You should go to the GitHub.io page for links to the Reddit forum and to the 2018-19 version of this unit: comsm0034.github.io

Recommended reading:

This field is highly interdisciplinary, so there is no single textbook that covers all our lectures. Relevant research papers will be highlighted during the lectures. However, the books listed below are the most relevant ones for this unit.

Theoretical neuroscience:

  1. Theoretical Neuroscience by P Dayan and L F Abbott (MIT Press 2001), see also errata.
  2. Neuronal Dynamics by Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski. Full version online.
  3. Introduction To The Theory Of Neural Computation, Volume I by John Hertz, Anders Krogh and Richard G. Palmer. (Classic and accessible book on neural computation)
  4. Bayesian Brain: Probabilistic Approaches to Neural Coding, edited by K Doya, S Ishii, A Pouget and R P N Rao (MIT Press 2007)
  5. Elements of Information Theory by TM Cover and JA Thomas (Wiley), worth owning, but there is an online pdf from www.cs-114.org

Machine/statistical Learning:

  1. General ML book: Information Theory, Inference and Learning Algorithms by David MacKay. Full version available online
  2. Deep Learning (including Recurrent neural nets): Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville
  3. Unsupervised learning: Natural Image Statistics by Aapo Hyvarinen, Jarmo Hurri, and Patrik O. Hoyer. Full version available online.
  4. Reinforcement learning: Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. Full version available online.

Super useful math/stat cheat-sheet by Iain Murray: homepages.inf.ed.ac.uk/imurray2/pub/cribsheet.pdf

Draft schedule

Conor (weeks 1-5):

  • Lectures 1-3: Information theory
  • Lectures 4-5: Statistical theory
  • Lectures 6-9: Probabilistic brain

Guest lecturer (week 5):

  • Lecture 10: Guest lecture on the probabilistic brain (Laurence Aitchison)

Rui (weeks 6-10): Neural circuits and learning

Office hours: Tuesdays 12-2pm @ 3.26 MVB

  • Lecture 11: Different forms of learning
  • Lecture 12: Visual System: deep learning?
  • Lecture 13: Supervised learning and backprop
  • Lecture 14: Unsupervised learning
  • Lecture 15: Reinforcement Learning
  • Lecture 16: Recurrent neural networks
  • Lecture 17: RNNs and Brain vs machine

Note: the lab for CW2 in week 7 will start with a lecture in the first hour.

Note 2: there won't be a lecture during the lab in week 10, so that you can focus on CW2.

Cian (week 11):

  • Lectures 18-19: Neural Data Analysis

Revision week (week 12):

  • Lecture 20: Conor's revision
  • Lecture 21: Rui/Cian revision

Coursework:

  • CW1: Problem sheet on probabilities
  • CW2: Coursework on learning (labs in week 7 (12/11) and week 10 (03/12))