In this project, we use a multimodal method to recognize and analyze human emotions from a combination of physical and physiological signals. Our goals are to obtain reasonably accurate results across the different signal types and to identify the primary components of each signal type that contribute to this result.
- This project uses the AMIGOS dataset, which requires signing a EULA before download. We therefore cannot redistribute the dataset; please download it from its official website.
- The dataset includes physical (facial expression) and physiological (EEG, ECG and GSR) signals, which we use to train and test our model.
- Details of the dataset are shown below.
Subtract the mean value of the baseline signal, which covers the first 5 seconds (128 × 5 frames) of each physiological data series.
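A minimal sketch of the baseline subtraction described above, assuming each trial is a NumPy array of shape `(n_samples, n_channels)` sampled at 128 Hz. The function name and the choice to drop the baseline frames after subtraction are illustrative assumptions, not the project's exact code:

```python
import numpy as np

SAMPLE_RATE = 128      # AMIGOS physiological signals are sampled at 128 Hz
BASELINE_SECONDS = 5   # the first 5 s of each trial are the baseline segment

def subtract_baseline(trial):
    """Remove the per-channel baseline mean from one trial.

    trial: array of shape (n_samples, n_channels); the first
    SAMPLE_RATE * BASELINE_SECONDS frames are the baseline.
    Returns the remaining signal with the baseline mean subtracted
    (dropping the baseline frames is an assumption).
    """
    n_base = SAMPLE_RATE * BASELINE_SECONDS
    baseline_mean = trial[:n_base].mean(axis=0)  # per-channel mean over 640 frames
    return trial[n_base:] - baseline_mean
```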
14-channel EEG signals from 'data_preprocessed' in the AMIGOS dataset.
2-channel ECG signals from 'data_preprocessed' in the AMIGOS dataset.
1-channel GSR signal from 'data_preprocessed' in the AMIGOS dataset.
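The three modalities above can be sliced out of a preprocessed trial as follows. This sketch assumes the 17 channels are ordered 14 EEG, then 2 ECG, then 1 GSR; verify the channel ordering against the AMIGOS documentation before relying on it:

```python
import numpy as np

def split_channels(trial):
    """Split one preprocessed AMIGOS trial into its modalities.

    trial: array of shape (n_samples, 17), with channels assumed to be
    ordered as 14 EEG, 2 ECG, 1 GSR.
    Returns (eeg, ecg, gsr) arrays sharing the same time axis.
    """
    eeg = trial[:, :14]    # 14 EEG channels
    ecg = trial[:, 14:16]  # 2 ECG channels
    gsr = trial[:, 16:17]  # 1 GSR channel (kept 2-D for consistency)
    return eeg, ecg, gsr
```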
model.py
Includes the three single-modality models.
utils.py
Includes performance metrics such as accuracy and F1-score.
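For reference, the two metrics named above can be computed as follows. This is a plain NumPy sketch for binary labels, not the project's actual `utils.py` implementation:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def f1_score_binary(y_true, y_pred):
    """F1-score for binary labels (positive class = 1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```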
Includes our fusion model and the test code.
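One common way to combine the single-modality models into a fusion model is late fusion over their predicted class probabilities. The sketch below is an assumption about the fusion strategy (a weighted average of per-model probabilities), not a description of the project's actual fusion model:

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    """Fuse per-model class probabilities by weighted averaging.

    prob_list: list of arrays, each of shape (n_samples, n_classes),
    one per single-modality model.
    weights: optional per-model weights; defaults to a uniform average.
    Returns the fused predicted class index per sample.
    """
    probs = np.stack(prob_list)  # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize to sum to 1
    fused = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)
```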