The two main files are vocoder.py and evaluation.py.
vocoder.py: Use "-m 1" to extract vocoder parameters from audio files and "-m 2" to reconstruct audio from vocoder parameters.
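The mode flag described above could be handled along these lines; this is a hypothetical sketch of the argument parsing, not vocoder.py's actual implementation.

```python
# Hypothetical sketch of a "-m" mode flag like the one vocoder.py describes;
# the real script's argument handling may differ.
import argparse

parser = argparse.ArgumentParser(description="Vocoder analysis/synthesis")
parser.add_argument("-m", type=int, choices=[1, 2], required=True,
                    help="1: extract vocoder parameters, 2: reconstruct audio")

# Parse an example command line ("-m 1" selects parameter extraction).
args = parser.parse_args(["-m", "1"])
if args.m == 1:
    print("mode: extract vocoder parameters")
else:
    print("mode: reconstruct audio")
```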
evaluation.py: Computes the average STOI, fwSNRseg, and cepstral distance (CD) across trials for each audio file included.
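The averaging that evaluation.py performs can be sketched as below; the metric names and the per-trial input layout are assumptions for illustration, not the script's actual interface.

```python
# Minimal sketch of averaging per-trial metrics for one audio file, assuming
# each metric has already been computed per trial (evaluation.py's real
# interface may differ).
from statistics import mean

def average_metrics(trials):
    """trials: list of dicts like {"stoi": .., "fwSNRseg": .., "CD": ..}.
    Returns the mean of each metric across trials."""
    keys = ("stoi", "fwSNRseg", "CD")
    return {k: mean(t[k] for t in trials) for k in keys}

trials = [
    {"stoi": 0.75, "fwSNRseg": 6.0, "CD": 4.0},
    {"stoi": 0.25, "fwSNRseg": 5.0, "CD": 4.5},
]
print(average_metrics(trials))  # -> {'stoi': 0.5, 'fwSNRseg': 5.5, 'CD': 4.25}
```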
Once you have .mat files containing your stimulus features (vocoder parameters) and your brain data, both in CND format, you can do the following:
- If not already done: for MEG data, run meg.m first to convert the data to CND format
- If not already done: Preprocess the brain data using vocoder_reconstruction.m
- Resample the stimulus features using resampleFeatures.m
- Combine the preprocessed brain data into one matrix using combine_subs.m
- Use mcca.m to get MCCA components for the brain data matrix
- Use create_subject to create a new CND-format subject from these components
- Create a model using this subject with vocoder_reconstruction.m
- Analyse the results for this model using evaluate_model.m
Parameter values will need to be changed to match your data: filenames, number of subjects, channels, trials, sampling rate (fs), minimum trial length, etc.
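The resampling step above (resampleFeatures.m) amounts to mapping the stimulus-feature matrix onto the brain data's time grid. A minimal Python sketch using linear interpolation is shown below; the function name, rates, and interpolation method are illustrative assumptions, and the MATLAB script may resample differently.

```python
# Illustrative sketch of resampling a (n_samples, n_features) stimulus-feature
# matrix from fs_in to fs_out via linear interpolation; resampleFeatures.m
# itself may use another method.
import numpy as np

def resample_features(features, fs_in, fs_out):
    """Return `features` linearly interpolated onto an fs_out time grid."""
    n_in = features.shape[0]
    duration = n_in / fs_in                  # total duration in seconds
    n_out = int(round(duration * fs_out))    # output sample count
    t_in = np.arange(n_in) / fs_in           # original time stamps
    t_out = np.arange(n_out) / fs_out        # target time stamps
    # Interpolate each feature column independently onto the new grid.
    return np.column_stack(
        [np.interp(t_out, t_in, features[:, j]) for j in range(features.shape[1])]
    )

feats = np.random.rand(1000, 8)  # e.g. 8 vocoder parameters at 100 Hz
resampled = resample_features(feats, fs_in=100, fs_out=64)
print(resampled.shape)  # -> (640, 8)
```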