This repository provides code for training and evaluating encoding models that predict fMRI brain responses to naturalistic video stimuli.
1. Create a conda environment for running and evaluating the model:

```bash
export ENVNAME=algonauts-2025
conda create -n $ENVNAME python=3.12 ipython -y
conda activate $ENVNAME
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0
git clone https://github.com/facebookresearch/algonauts-2025.git
cd algonauts-2025/data_utils
pip install -e .
cd ../modeling_utils
pip install -e .
pip install transformers moviepy spacy nilearn Levenshtein "huggingface_hub[cli]" julius
```
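To confirm that the environment is set up correctly, an optional sanity check (not part of the official setup) is to verify that the pinned PyTorch build and the main dependencies import cleanly:

```python
# Optional sanity check for the freshly created environment.
import torch
import torchvision
import transformers

print(torch.__version__)          # expect 2.6.0
print(torchvision.__version__)    # expect 0.21.0
print(torch.cuda.is_available())  # True if a CUDA-enabled GPU is visible
```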
2. Get access to the Llama-3.2-3B repository on Hugging Face. First, run:

```bash
huggingface-cli login
```

Then, create a read token and paste it when prompted.
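Once logged in, you can check that the gated repository is reachable with your token. A minimal sketch, assuming the model ID is `meta-llama/Llama-3.2-3B` (adjust if the project uses a different checkpoint):

```python
from huggingface_hub import HfApi

# Raises a GatedRepoError if your token has not been granted access yet.
# The model ID below is an assumption; change it if you need another checkpoint.
print(HfApi().model_info("meta-llama/Llama-3.2-3B").id)
```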
3. Set the path to the Algonauts dataset, the directory where results should be saved, and the Slurm partition to use. This can be done by setting the corresponding values in `algonauts2025/grids/defaults.py`, or alternatively by adding the following to your shell's startup file (e.g., `.bashrc`, `.zshrc`):

```bash
export SAVEPATH="/your/save/directory"
export DATAPATH="/path/to/algonauts/dataset"
export SLURM_PARTITION="your-slurm-partition"
```
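If you use the environment-variable route, the values can be read back in Python along these lines. This is only a sketch; the actual defaults and lookup logic live in `algonauts2025/grids/defaults.py`:

```python
import os
from pathlib import Path

# Sketch of reading the exported variables; the real fallbacks are
# defined in algonauts2025/grids/defaults.py and may differ.
SAVE_PATH = Path(os.environ["SAVEPATH"])         # where results are written
DATA_PATH = Path(os.environ["DATAPATH"])         # root of the Algonauts dataset
SLURM_PARTITION = os.environ["SLURM_PARTITION"]  # partition for job submission
```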
4. Launch a quick test run to verify your setup, then train the full grid of models and the ensemble:

```bash
python -m algonauts2025.grids.test_run
python -m algonauts2025.grids.run_grid
python -m algonauts2025.grids.run_ensemble
```
Training and results can be monitored using Weights & Biases; see the config key `wandb_config`.
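The exact schema of `wandb_config` is defined by the grid configs in this repository; purely as a hypothetical illustration, such an entry usually carries standard Weights & Biases fields:

```python
# Hypothetical illustration only; check the grid configs for the actual
# wandb_config schema used by this repository.
wandb_config = {
    "project": "algonauts-2025",    # assumed W&B project name
    "entity": "your-wandb-entity",  # your W&B username or team
    "mode": "online",               # "offline" or "disabled" to opt out
}
```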
This repository is CC-BY-NC licensed, as found in the LICENSE file.