This repository contains the code, stimuli, pretrained models, and simulation results reported in the following paper:
Orhan AE (2022) Can deep learning match the efficiency of human visual long-term memory in storing object details? arXiv:2204.13061.
Part of the code here is adapted from Andrej Karpathy's minimalistic GPT (minGPT) implementation.
- `stimuli`: contains the study and test images used in the simulated versions of the Brady et al. (2008) and Konkle et al. (2010) experiments.
- `results`: contains the simulation results from all experiments and code for reading and plotting the results.
- `scripts`: contains example SLURM scripts for running the code on an HPC cluster.
- `mingpt`: contains utility functions for the iGPT model (adapted from Andrej Karpathy's minGPT implementation).
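For reference, a SLURM script like the ones in `scripts` generally looks as follows. This is only a hedged sketch: the `#SBATCH` directives are standard SLURM options, but the resource values, job name, and the command-line arguments passed to `train.py` are hypothetical placeholders, not the actual flags defined by this repository's scripts.

```shell
#!/bin/bash
#SBATCH --job-name=igpt_train        # hypothetical job name
#SBATCH --nodes=1                    # single node
#SBATCH --gres=gpu:1                 # request one GPU (cluster-dependent syntax)
#SBATCH --mem=64GB                   # memory per node (illustrative value)
#SBATCH --time=48:00:00              # wall-clock limit (illustrative value)

# Train an iGPT model; the flag names below are placeholders --
# consult train.py's argument parser for the real options.
python -u train.py --data_path /path/to/dataset --save_dir /path/to/checkpoints
```

Such a script would be submitted with `sbatch`, e.g. `sbatch scripts/train.sh`.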
- `train.py`: trains an iGPT model on a given dataset.
- `finetune.py`: finetunes a model on the study set of a recognition memory experiment.
- `run_random_noise_expt.py`: runs the random noise experiment reported in Figure 3c of the paper.
- `test.py`: evaluates a model on the test set of a recognition memory experiment.
- `generate.py`: generates samples from an iGPT model.
- `iGPT-S-ImageNet.pt`: iGPT-S model pretrained on ImageNet (1.9 GB).
- `iGPT-mini-ImageNet.pt`: iGPT-mini model pretrained on ImageNet (0.5 GB).
- `iGPT-S-SAYCam.pt`: iGPT-S model pretrained on SAYCam (1.9 GB).
- `iGPT-S-SAYCam-0.1.pt`: iGPT-S model pretrained on 10% of SAYCam (1.9 GB).
- `iGPT-S-SAYCam-0.01.pt`: iGPT-S model pretrained on 1% of SAYCam (1.9 GB).