
A de-anonymized submission to the NeuroAI workshop at NeurIPS 2019


Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks

by Roman Pogodin, Dane Corneil, Alexander Seeholzer, Joseph Heng, Wulfram Gerstner

Work accepted to the NeurIPS 2019 workshop "Real Neurons and Hidden Units: Future directions at the intersection of neuroscience and artificial intelligence"

OpenReview link, arXiv link

Description

We study the stability and efficiency of local, reward-modulated Hebbian learning rules in the reservoir computing setting, when the reservoir receives an additional stabilizing input.
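To make the setting concrete, here is a minimal sketch (not the authors' code) of a reward-modulated Hebbian rule training the readout of a rate reservoir. In the paper the stabilizing input comes from an attractor network with working memory; here a constant drive `b` stands in for it, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                      # reservoir size (assumed)
g = 0.9                      # recurrent gain; sub-chaotic so activity settles
dt, tau = 0.01, 0.1          # integration step and neuron time constant
W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
b = rng.standard_normal(N)   # constant drive standing in for the stabilizing input
w_out = np.zeros(N)          # plastic readout weights
eta, noise_std = 0.01, 0.3   # learning rate and exploration noise (assumed)
target, r_bar = 1.0, 0.0     # toy scalar target; running reward baseline

x = np.zeros(N)
errors = []
for step in range(3000):
    r = np.tanh(x)                          # firing rates
    x += dt / tau * (-x + W @ r + b)        # leaky rate dynamics
    xi = noise_std * rng.standard_normal()  # exploratory readout noise
    z = w_out @ r + xi                      # noisy readout
    reward = -(z - target) ** 2             # scalar reward signal
    # Three-factor Hebbian update: presynaptic rate times exploration noise,
    # gated by reward relative to its running baseline.
    w_out += eta * (reward - r_bar) * xi * r
    r_bar += 0.05 * (reward - r_bar)
    errors.append(abs(w_out @ r - target))  # deterministic readout error
```

The exploration noise plus reward-baseline gating makes the update a local, REINFORCE-style estimate of the reward gradient, so the readout error shrinks over training without any backpropagation through the reservoir.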

Usage

  1. experiments.py -- experiments with both networks
  2. plotting.py -- plotting tools for the experiments
  3. simulation_interface.py -- interaction with both networks
  4. reservoir_net.py -- a reservoir of neurons
  5. attractor_net.py -- a rate network with attractor dynamics
  6. utils.py -- utility functions not directly related to the experiments