course-content

NMA Computational Neuroscience course


Neuromatch Academy (NMA) Computational Neuroscience syllabus

The content should primarily be accessed from our new ebook: https://compneuro.neuromatch.io/

Objectives: Introduce traditional and emerging computational neuroscience tools, their complementarity, and what they can tell us about the brain. A main focus is on modeling choices, model creation, model evaluation and understanding how they relate to biological questions.

Day structure: Opening keynote, 3h15min lecture/live Q&A/tutorial modules, 40min group discussion, 55min interpretation lecture (Outro) + several live Q&As (what did we learn today, what does it mean, underlying philosophy). There will also be many networking activities (professional development sessions, yoga, social hangouts, etc.)! Details below.

Note for visitors from China: This repository contains many links to YouTube and Google Colab. We have a version of the repository with those same videos posted on bilibili and the Google Colab links replaced with links to Aliyun. Please visit the China Accessible Neuromatch Course-Content.

Prerequisites: See here

Course materials

Group projects are offered for the interactive track only and will be running during all 3 weeks of NMA!

Course outline

  • Week 0 (Optional)

    • Asynchronous: Python Workshop Part 1 for students + mandatory TA training for all TAs
    • Asynchronous: Python Workshop Part 2 for students + mandatory TA training for all TAs
    • Wed, June 30th: Linear Algebra (Mandatory for all Tutorial TAs). Project TAs have separate training.
    • Thu, July 1st: Calculus (Mandatory for all Tutorial TAs). Project TAs have separate training.
    • Fri, July 2nd: Probability & Statistics (Mandatory for all Tutorial TAs). Project TAs have separate training.
  • Week 1

    • Mon, July 5: Model Types
    • Tue, July 6: Modeling Practice
    • Wed, July 7: Model Fitting
    • Thu, July 8: Generalized Linear Models
    • Fri, July 9: Dimensionality Reduction
  • Week 2

    • Mon, July 12: Deep Learning
    • Tue, July 13: Linear Systems
    • Wed, July 14: Biological Neuron Models
    • Thu, July 15: Dynamic Networks
    • Fri, July 16: Project day!
  • Week 3

    • Mon, July 19: Bayesian Decisions
    • Tue, July 20: Hidden Dynamics
    • Wed, July 21: Optimal Control
    • Thu, July 22: Reinforcement Learning
    • Fri, July 23: Network Causality

Daily schedule

All days (except W1D2, W2D5, and W3D5) will follow this schedule for course time:

Time (Hour) Lecture
-0:35-0:00* Intro video & text
0:00-0:15 Pod discussion I
0:15-1:45 Tutorials + nano-lectures I
1:45-2:45 Big break
2:45-4:15 Tutorials + nano-lectures II
4:15-4:25 Pod discussion II
4:25-4:30 Reflections & content checks
4:35-5:05** Outro
  * The Intro/keynote is watched asynchronously, which means you can watch this lecture before the start of the day.
  ** Please watch the Outro on your own after completing the tutorials and before the Q&A session.

On W2D1, W2D4, and W3D4:

Time (Hour) Lecture
5:10-6:10 Live Q&A

On W1D2 (project launch day):

Time (Hour) Lecture
-0:20-0:00* Intro video & text
0:00-0:15 Pod discussion I
0:15-2:00 Tutorials + nano-lectures I
2:00-2:15 Outro
2:15-3:15 Big break
3:15-5:00 Literature review
5:00-5:15 Break
5:15-8:00** Project proposal

** Note that this includes the next available project time, which may be on the next day.

On W2D5 (abstract writing day):

Time (Hour) Lecture
0:00-2:20* Abstract workshop
2:20-2:50 Break
2:50-4:20 Individual abstract editing
4:20-5:05 Mentor meeting
5:05-5:25 Break
5:25-6:25 Pod abstract swap
6:25-8:00 Finalize abstract
  * This day is completely asynchronous, so you should combine tutorial and project time for a total of 8 hours.

Licensing

CC BY 4.0


The contents of this repository are shared under a Creative Commons Attribution 4.0 International License.

Software elements are additionally licensed under the BSD (3-Clause) License.

Derivative works may use the license that is more appropriate to the relevant context.