Python tools for participating in Neural Latents Benchmark '21.
Neural Latents Benchmark '21 (NLB'21) is a benchmark suite for unsupervised modeling of neural population activity. The suite includes four datasets spanning a variety of brain areas and experiments. The primary task in the benchmark is co-smoothing, or inference of firing rates of unseen neurons in the population.
This repo contains code to facilitate participation in NLB'21:
- `nlb_tools/` has code to load and preprocess our dataset files, format data for modeling, and locally evaluate results (a brief local-evaluation sketch follows this list)
- `examples/tutorials/` contains tutorial notebooks demonstrating basic usage of `nlb_tools`
- `examples/baselines/` holds the code we used to run our baseline methods. They may serve as helpful references on more extensive usage of `nlb_tools`
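As a preview of the local evaluation mentioned above, a minimal sketch might look like the following. The file names are placeholders, and the exact arguments accepted by `evaluate` are an assumption here; see `examples/tutorials/` for the authoritative usage.

```python
# Hypothetical sketch of local evaluation with nlb_tools.
# 'eval_target.h5' and 'submission.h5' are placeholder file names for the
# held-out target tensors and your model's predicted rates; the exact
# arguments evaluate() accepts may differ -- see examples/tutorials/.
from nlb_tools.evaluation import evaluate

results = evaluate('eval_target.h5', 'submission.h5')
print(results)
```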
The package can be installed with the following commands:
```
git clone https://github.com/neurallatents/nlb_tools.git
cd nlb_tools
pip install -e .
```
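If the install succeeded, the package should be importable. A quick sanity check (the `NWBDataset` import assumes the module layout used in the tutorials):

```python
# Both imports should succeed after `pip install -e .`
import nlb_tools
from nlb_tools.nwb_interface import NWBDataset  # data-loading class used in the tutorials
```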
This package requires Python 3.7+; it was developed in Python 3.7, which is the version we recommend using.
We recommend reading/running through `examples/tutorials/basic_example.ipynb` to learn how to use `nlb_tools` to load and format data for our benchmark. You can also find Jupyter notebooks demonstrating how to run GPFA and SLDS for the benchmark in `examples/tutorials/`.
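As a rough preview of what the tutorial covers, the load/format workflow looks approximately like the sketch below. The dataset path is a placeholder, and the helper argument names and returned keys are assumptions based on the tutorials, so treat `basic_example.ipynb` as the ground truth.

```python
# Approximate sketch of the workflow in examples/tutorials/basic_example.ipynb.
# "path/to/mc_maze/" is a placeholder for a dataset downloaded from DANDI, and
# argument names/keys are assumptions -- check the notebook for exact usage.
from nlb_tools.nwb_interface import NWBDataset
from nlb_tools.make_tensors import make_train_input_tensors, make_eval_input_tensors

# Load the NWB file(s) and rebin spikes to 5 ms
dataset = NWBDataset("path/to/mc_maze/")
dataset.resample(5)

# Build model-ready tensors of held-in/held-out spiking activity
train_dict = make_train_input_tensors(dataset, dataset_name="mc_maze", trial_split="train", save_file=False)
eval_dict = make_eval_input_tensors(dataset, dataset_name="mc_maze", trial_split="val", save_file=False)

train_spikes_heldin = train_dict["train_spikes_heldin"]    # trials x time x held-in neurons
train_spikes_heldout = train_dict["train_spikes_heldout"]  # trials x time x held-out neurons
```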
For more information on the benchmark:
- our main webpage contains general information on our benchmark pipeline and introduces the datasets
- our EvalAI challenge is where submissions are evaluated and displayed on the leaderboard
- our datasets are available on DANDI: MC_Maze, MC_RTT, Area2_Bump, DMFC_RSG, MC_Maze_Large, MC_Maze_Medium, MC_Maze_Small
- our paper describes our motivations behind this benchmarking effort as well as various technical details and explanations of design choices made in preparing NLB'21
- our Slack workspace lets you interact directly with the developers and other participants. Please email fpei6 [at] gatech [dot] edu for an invite link