How is the EEG reconstructed in the paper
Hello, I would like to use your preprocessed EEG data for my further research. I would really like to know how you used this data to reconstruct imagined images.
Looking forward to your reply.
reesy
Hi, which EEG dataset are you interested in specifically?
You can download our preprocessed version of all cognitive data sources here: https://drive.google.com/file/d/1pWwIiCdB2snIkgJbD1knPQ6akTPW_kx0/view
Or you can use the CogniVal command line interface to use the evaluation framework: https://github.com/DS3Lab/cognival-cli/
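In case it helps while you wait for an answer: a minimal loading sketch in Python, assuming the preprocessed files follow a word-embedding-style text format (one word per line, followed by space-separated feature values). The filename is hypothetical; check the downloaded files for the exact layout (header line, separator, etc.) and adjust the parsing accordingly.

```python
# Minimal sketch for loading one preprocessed cognitive-data file.
# Assumes a word-embedding-style text format: word followed by
# space-separated feature values. The filename is hypothetical.
import numpy as np

def load_cognitive_vectors(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split()
            if len(parts) < 2:
                continue  # skip empty or malformed lines
            word, values = parts[0], parts[1:]
            try:
                vectors[word] = np.array(values, dtype=float)
            except ValueError:
                continue  # skip a non-numeric header line, if present
    return vectors

vectors = load_cognitive_vectors("zuco_eeg.txt")  # hypothetical filename
print(len(vectors))
```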
Nora
Hello, I have two questions:
- For fMRI: the dataset you provided contains 100-, 500-, and 1000-dimensional versions. Which brain regions do the retained voxels correspond to? And do you still have the corresponding original files?
- For EEG: How do you reconstruct the brain images? I would prefer to use the ZuCo dataset you used; do you reconstruct directly from the dataset provided at https://drive.google.com/file/d/1pWwIiCdB2snIkgJbD1knPQ6akTPW_kx0/view? How is the reconstruction implemented?
(I'm sorry, my background is in NLP and I still lack some knowledge about cognition. Thanks again for your reply.)
Hi,
- For fMRI, the voxels are chosen randomly, not according to any specific brain regions (see the sketch after this list). You can find the links to the original datasets here: https://github.com/norahollenstein/cognitiveNLP-dataCollection/wiki/Functional-magnetic-resonance-imaging-(fMRI)
- For EEG: Yes, the files in the Google Drive link are exactly the files we used for CogniVal. If you need the original ZuCo data, you can find it here: https://osf.io/q3zws/
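To illustrate the random voxel selection mentioned in the first point: a minimal sketch, assuming the per-word fMRI responses are stored as a 2-D array of shape (words, voxels). The array contents and dimensions here are placeholders, not the actual data.

```python
# Minimal sketch of random voxel selection: keep a fixed random
# subset of voxel columns rather than voxels from specific brain
# regions. The array shape and contents are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)        # fixed seed for reproducibility
fmri = rng.standard_normal((100, 20000))    # placeholder: words x voxels
n_voxels = 1000                             # e.g. the 1000-dimensional version

# Sample voxel indices once, then apply the same subset to every word.
keep = rng.choice(fmri.shape[1], size=n_voxels, replace=False)
fmri_reduced = fmri[:, keep]
print(fmri_reduced.shape)                   # (100, 1000)
```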
Sorry, one more question: for the fMRI data, how is the cognitive signal for the same word in different contexts obtained in the paper?
(By taking an average?)
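For what it's worth, if the answer turns out to be averaging, a per-word mean over contexts might look like the sketch below. The data structure and values are made up for illustration; whether the paper actually does this is exactly the open question here.

```python
# Hypothetical sketch of averaging a word's cognitive signal over
# its different contexts: group all occurrence vectors by word and
# take the element-wise mean.
from collections import defaultdict
import numpy as np

# (word, vector) pairs, one per context -- made-up data
occurrences = [
    ("brain", np.array([0.1, 0.4])),
    ("brain", np.array([0.3, 0.2])),
    ("word",  np.array([0.5, 0.5])),
]

by_word = defaultdict(list)
for word, vec in occurrences:
    by_word[word].append(vec)

averaged = {w: np.mean(vs, axis=0) for w, vs in by_word.items()}
print(averaged["brain"])  # [0.2 0.3]
```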