This repository contains the source code for our paper: "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper.
Human state recognition is a critical topic due to its pervasive and crucial applications in human-machine systems, and multi-modal fusion, which combines metrics from multiple data sources, has been shown to be a sound method for improving recognition performance. Despite the promising results of recent multi-modal-based models, they generally fail to leverage sophisticated fusion strategies that model sufficient cross-modal interactions when producing the fusion representation, and they rely heavily on lengthy and inconsistent data preprocessing and feature crafting. To address these limitations, we propose an end-to-end multi-modal transformer framework for multi-modal human state recognition called Husformer. Specifically, we propose cross-modal transformers, which inspire one modality to directly attend to latent relevance revealed in other modalities to reinforce itself, to fuse different modalities with sufficient awareness of the cross-modal interactions introduced. A self-attention transformer is then utilized to further prioritize the important contextual information of the human state in the fusion representation. Additionally, these two attention mechanisms enable effective and adaptive adjustments to noise and interruptions in multi-modal signals during the fusion process and at the high-level feature level, respectively. Extensive experiments on two human emotion corpora (DEAP and WESAD) and two cognitive workload datasets (MOCAS and CogLoad) demonstrate that Husformer outperforms both state-of-the-art multi-modal baselines and single-modality recognition by a large margin, especially when dealing with raw multi-modal signals. An ablation study further demonstrates the benefits of each component of Husformer.
Name | Modalities | Acc(%) | F1(%) | Dataset |
---|---|---|---|---|
Raw-MOCAS | 5 | 93.71±2.26 | 93.82±2.41 | Raw-MOCAS |
Preprocessed-MOCAS | 5 | 96.42±2.11 | 96.51±2.03 | Pre-MOCAS |
Raw-DEAP(Valence) | 4 | 85.98±1.38 | 86.21±1.40 | Raw-DEAP |
Raw-DEAP(Arousal) | 4 | 86.28±2.04 | 86.78±2.11 | Raw-DEAP |
Preprocessed-DEAP(Valence) | 4 | 97.01±2.06 | 97.08±2.15 | Pre-DEAP |
Preprocessed-DEAP(Arousal) | 4 | 97.67±1.45 | 97.69±1.53 | Pre-DEAP |
WESAD | 6 | 85.02±1.91 | 85.85±2.14 | WESAD |
Cogload | 5 | 80.40±2.34 | 81.27±2.63 | Cogload |
- Python 3.8
- PyTorch 1.8.2+cu111 and torchvision
- CUDA 11.1 or above
- scikit-learn 1.0.2
- NumPy 1.19.5
(The code was tested in Ubuntu 18.04 with Python 3.8.)
Download links for the datasets (DEAP, WESAD, MOCAS, and CogLoad) can be found in the table above.
Husformer reads and loads data from 'Husformer.pkl' in 'data/' for both training and testing.
Before running the training or testing commands, you should convert the data file from its original format (e.g., '.csv') to '.pkl' and rename it 'Husformer.pkl'.
We provide Python demos for this format conversion in 'make_data/', named after each dataset, such as 'Pre-MOCAS.py' and 'Raw-MOCAS.py'. You should create a 'dataset_name_list.txt' containing the path of the downloaded dataset file so that the make_data scripts can locate it.
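As a rough illustration of what such a conversion script does, the sketch below reads a dataset path from 'dataset_name_list.txt', loads a CSV file, and dumps it as 'Husformer.pkl'. The flat row layout here is an assumption for illustration only; consult the actual scripts in 'make_data/' for the per-modality structure Husformer expects.

```python
import csv
import pickle

def csv_to_pkl(list_file="dataset_name_list.txt", out_file="Husformer.pkl"):
    """Hypothetical sketch: locate the dataset via the path list file,
    load it as numeric rows, and serialize it to a .pkl file."""
    with open(list_file) as f:
        data_path = f.readline().strip()  # first line holds the dataset path
    with open(data_path) as f:
        rows = [[float(v) for v in row] for row in csv.reader(f)]
    with open(out_file, "wb") as f:
        pickle.dump(rows, f)
    return len(rows)  # number of samples written
```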
For each dataset, we randomly shuffle all data and conduct K-fold cross-validation (K = 10). Thus, each run of a make_data script produces 10 '.pkl' files.
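The shuffle-and-split step above can be sketched as follows. The train/test dictionary layout inside each '.pkl' is an assumption for illustration; the make_data scripts define the exact format.

```python
import pickle
import numpy as np

def make_folds(samples, labels, k=10, seed=0, out_prefix="Husformer"):
    """Shuffle the data once, then write k cross-validation folds,
    each holding a (train, test) split, as separate .pkl files."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))        # one random shuffle of all data
    folds = np.array_split(idx, k)             # k roughly equal index chunks
    paths = []
    for i, test_idx in enumerate(folds):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        fold = {
            "train": ([samples[t] for t in train_idx], [labels[t] for t in train_idx]),
            "test": ([samples[t] for t in test_idx], [labels[t] for t in test_idx]),
        }
        path = f"{out_prefix}_fold{i}.pkl"
        with open(path, "wb") as f:
            pickle.dump(fold, f)
        paths.append(path)
    return paths
```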
We provide three model files, corresponding to task scenarios involving 3, 4, and 5 modalities. You can follow the provided demos to create new model files if you want to use Husformer with more or fewer modalities.
- Move the target model files from the corresponding folder, e.g., 'src/3', 'src/4', or 'src/5', into 'src/'.
- Rename the target 'main-x.py' in 'src/', e.g., 'main-3.py', 'main-4.py', or 'main-5.py', to 'main.py'.
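The two setup steps above can be scripted. The sketch below assumes the repository layout described here ('src/<n>/' holding the model files and 'src/main-<n>.py' alongside them); adjust the paths if your checkout differs.

```python
import shutil
from pathlib import Path

def setup_modality(n, root="src"):
    """Move the model files for an n-modality scenario from src/<n>/
    into src/, then rename main-<n>.py to main.py."""
    src_dir = Path(root) / str(n)
    for f in sorted(src_dir.iterdir()):        # move each model file up into src/
        shutil.move(str(f), root)
    (Path(root) / f"main-{n}.py").rename(Path(root) / "main.py")
```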
- We provide a converted 'cogload.pkl' in 'data/'. If using other datasets, make the data as follows:
python make_data/dataset_name.py
Then put the generated '.pkl' data file in 'data/' and rename it 'Husformer.pkl'.
- Train with the following command:
python main.py
- Test with the following command:
python main.py --eval
If you find the code or the paper useful for your research, please cite our paper:
@article{wang2022husformer,
  title={Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition},
  author={Wang, Ruiqi and Jo, Wonse and Zhao, Dezhong and Wang, Weizheng and Yang, Baijian and Chen, Guohua and Min, Byung-Cheol},
  journal={arXiv preprint arXiv:2209.15182},
  year={2022}
}
Contributors:
Ruiqi Wang; Dezhong Zhao; Wonse Jo; Byung-Cheol Min.
Part of the code is based on the following repository:
Multimodal-Transformer.