Kamitani Lab
Sharing code and data from Kamitani Lab (PI: Yukiyasu Kamitani) at Kyoto University and ATR: brain decoding, neuroimaging, machine learning, neuroinformatics
Kyoto, Japan
Pinned Repositories
bdpy
Python package for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
BHscore
brain-decoding-cookbook-public
BrainDecoderToolbox2
Matlab library for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
DeepImageReconstruction
Data and code for Shen, Horikawa, Majima, and Kamitani (2019) Deep image reconstruction from human brain activity. PLoS Comput. Biol. http://dx.doi.org/10.1371/journal.pcbi.1006633.
GenericObjectDecoding
Demo code for Horikawa and Kamitani (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nat Commun https://www.nature.com/articles/ncomms15037.
HumanDreamDecoding
Code used in "Neural decoding of visual imagery during sleep" by Horikawa et al. (Science, 2013, http://science.sciencemag.org/content/340/6132/639.long)
icnn
iCNN: image reconstruction from CNN features
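iCNN-style reconstruction optimizes an image so that its features match target features (e.g., features decoded from brain activity). A toy sketch of that objective, using a random linear map as a stand-in for a CNN feature extractor; all names and shapes here are illustrative, not the repository's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))   # stand-in "feature extractor": f(x) = W @ x
x_true = rng.standard_normal(256)    # hypothetical image (flattened)
target = W @ x_true                  # target features to be matched

# Reconstruct by gradient descent on the feature-matching loss ||W @ x - target||^2
x = np.zeros(256)
lr = 5e-4
for _ in range(2000):
    grad = 2 * W.T @ (W @ x - target)
    x -= lr * grad
```

The real method uses deep hierarchical CNN features and image priors; this only illustrates the feature-matching idea.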
IllusionReconstruction
A reconstruction framework for materializing subjective experiences from brain signals
VBCCA
Variational Bayesian Canonical Correlation Analysis
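VBCCA is a Bayesian treatment of canonical correlation analysis (CCA), which finds linear projections of two data views that are maximally correlated. As background, classical CCA can be computed from the SVD of the whitened cross-product of the two views; a minimal numpy sketch (illustrative background only, not the repository's implementation):

```python
import numpy as np

def cca(X, Y, n_components=1):
    """Classical CCA via SVD: projection weights and canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each view through its thin SVD
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    # Canonical correlations are singular values of the whitened cross-product
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    Wx = Vxt.T @ np.diag(1.0 / Sx) @ U[:, :n_components]
    Wy = Vyt.T @ np.diag(1.0 / Sy) @ Vt.T[:, :n_components]
    return Wx, Wy, S[:n_components]
```

The variational Bayesian formulation additionally infers latent dimensionality and handles noise explicitly, which the closed-form version above does not.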
Kamitani Lab's Repositories
KamitaniLab/DeepImageReconstruction
Data and code for Shen, Horikawa, Majima, and Kamitani (2019) Deep image reconstruction from human brain activity. PLoS Comput. Biol. http://dx.doi.org/10.1371/journal.pcbi.1006633.
KamitaniLab/GenericObjectDecoding
Demo code for Horikawa and Kamitani (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nat Commun https://www.nature.com/articles/ncomms15037.
KamitaniLab/bdpy
Python package for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
KamitaniLab/icnn
iCNN: image reconstruction from CNN features
KamitaniLab/brain-decoding-cookbook-public
KamitaniLab/End2EndDeepImageReconstruction
KamitaniLab/EmotionVideoNeuralRepresentation
Data and code for reproducing results of Horikawa, Cowen, Keltner, and Kamitani (2020) The neural representation of visually evoked emotion is high-dimensional, categorical, and distributed across transmodal brain regions. iScience (https://www.cell.com/iscience/fulltext/S2589-0042(20)30245-5).
KamitaniLab/InterIndividualDeepImageReconstruction
KamitaniLab/BHscore
KamitaniLab/IllusionReconstruction
A reconstruction framework for materializing subjective experiences from brain signals
KamitaniLab/ist-group-seminar-kamitani
KamitaniLab/OpenData
Portal to open data from Kamitani Lab, Kyoto Univ. and ATR. https://kamitanilab.github.io/OpenData/
KamitaniLab/PyFastL2LiR
Fast L2-regularized linear regression
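L2-regularized (ridge) linear regression has a closed-form solution, and solving all targets in one factorization is what makes a vectorized implementation fast for many-target problems such as decoding thousands of DNN unit activations from voxels. A minimal sketch of that closed form (illustrative only, not PyFastL2LiR's actual interface):

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.

    X: (n_samples, n_features); Y: (n_samples, n_targets).
    One matrix factorization serves every target column of Y.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)
```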
KamitaniLab/SoundReconstruction
KamitaniLab/dnn-feature-decoding
KamitaniLab/GOD_stimuli_annotations
Captions for the Generic Object Decoding (GOD) stimulus dataset
KamitaniLab/docker-images
KamitaniLab/EmotionVideoNeuralRepresentationPython
Python version of the EmotionVideoNeuralRepresentation code (Horikawa et al., 2020)
KamitaniLab/feature-decoding
KamitaniLab/InterSiteNeuralCodeConversion
KamitaniLab/SpecVQGAN
Source code for "Taming Visually Guided Sound Generation" (oral at BMVC 2021)
KamitaniLab/spurious_reconstruction
KamitaniLab/feature-encoding
KamitaniLab/bdata-datasets
KamitaniLab/bdpy-1
Python package for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
KamitaniLab/brain-decoding-bootcamp
KamitaniLab/fLoc
Functional localizer experiment used to define category-selective cortical regions
KamitaniLab/fmriprep
fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
KamitaniLab/mind-vis
Code base for MinD-Vis
KamitaniLab/toolbox3d