
Brain2Pix: Supplementary materials

Introduction

Welcome to the repository that contains supplementary materials and the source code for the paper "Brain2Pix: Fully convolutional naturalistic video reconstruction from brain activity".

The brain2pix model consists of two parts: (1) making the RFSimages and (2) training the GAN-like model. To reproduce the experiment, first see the data_preprocessing files for making the RFSimages, then the experiment files for training the model.
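The core idea behind an RFSimage is to project each voxel's activity onto a 2D canvas at that voxel's receptive-field location in visual space. As a rough illustration only (the function name, array layout, and averaging rule below are assumptions, not the repository's actual implementation), this could look like:

```python
import numpy as np

def make_rfs_image(voxel_activity, rf_x, rf_y, size=96):
    """Hypothetical sketch: place each voxel's activity at its
    receptive-field (x, y) location and average voxels that share a pixel."""
    canvas = np.zeros((size, size))
    counts = np.zeros((size, size))
    for act, x, y in zip(voxel_activity, rf_x, rf_y):
        canvas[y, x] += act
        counts[y, x] += 1
    # average where several voxels map to the same pixel; zeros elsewhere
    return canvas / np.maximum(counts, 1)

# toy example: three voxels, two of which share a receptive-field center
acts = np.array([1.0, 0.5, 2.0])
xs = np.array([10, 10, 40])
ys = np.array([20, 20, 60])
img = make_rfs_image(acts, xs, ys)  # img[20, 10] is the mean of 1.0 and 0.5
```

See the data_preprocessing folder for how the RFSimages are actually constructed from the raw recordings.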

Folders

data_preprocessing: this folder contains all the steps for transforming raw brain signals into RFSimages.

experiment: code containing the model and training loop for the experiments.

visualizations: reconstruction videos in GIF format and figures in PDF format.

Results

Main results -- FixedRF (test set):

fixed RFSimage | reconstruction | ground truth

[GIFs: fixedRF_recons_of_all_frames_as_video_a, fixedRF_recons_of_all_frames_as_video_b, fixedRF_recons_of_all_frames_as_video_c]

Main results -- LearnedRF (test set):

learned RFSimage | reconstruction | ground truth

[GIFs: learnedRF_recons_of_all_frames_as_video_a, learned_RF_recons_of_all_frames_as_video_b, learned_RF_recons_of_all_frames_as_video_c]

Additional results (test set):

[GIFs: recons_of_all_frames_as_video_additional_a, recons_of_all_frames_as_video_additional_b]

Codes:

More information on the experiment code can be found in the README inside the "experiment" folder.

To replicate the main experiment, please see the "experiment/learnableRF" and "experiment/fixedRF" folders.
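The model is trained adversarially in a pix2pix-like fashion, mapping RFSimages to video frames. The sketch below is a generic illustration of such a conditional-GAN training step, not the repository's actual networks or hyperparameters (the tiny architectures, loss weighting, and stand-in random tensors are all assumptions); the real training loops live in the experiment folders.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator (RFSimage -> frame) and a conditional
# discriminator that scores (RFSimage, frame) pairs. Purely illustrative.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(4, 8, 3, stride=2), nn.ReLU(),
                  nn.Conv2d(8, 1, 3))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

rfs = torch.randn(2, 1, 64, 64)    # batch of RFSimages (random stand-in data)
frame = torch.randn(2, 3, 64, 64)  # corresponding ground-truth frames

# discriminator step: real pairs should score high, generated pairs low
fake = G(rfs)
d_real = D(torch.cat([rfs, frame], 1))
d_fake = D(torch.cat([rfs, fake.detach()], 1))
d_loss = (bce(d_real, torch.ones_like(d_real)) +
          bce(d_fake, torch.zeros_like(d_fake)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# generator step: fool the discriminator plus an L1 reconstruction term
d_fake = D(torch.cat([rfs, fake], 1))
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, frame)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The L1 term keeps reconstructions close to the target frame while the adversarial term pushes them toward realistic-looking outputs.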

Datasets:

Dr. Who: The Dr. Who dataset is publicly available. The first author of the dataset paper (Seeliger et al., 2019) mentioned (at http://disq.us/p/23bj45d) that a download link will be activated soon; for now, the dataset is available by contacting the authors.

vim2: This dataset was taken from http://crcns.org/ and was originally published by Nishimoto et al. (2011).

References:

Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

Seeliger, K., et al. (2019). A large single-participant fMRI dataset for probing brain responses to naturalistic stimuli in space and time. bioRxiv, 687681.

Shen, G., Dwivedi, K., Majima, K., Horikawa, T., & Kamitani, Y. (2019). End-to-end deep image reconstruction from human brain activity. Frontiers in Computational Neuroscience, 13, 21.