EEG-TransNet

Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding [paper]

This is the PyTorch implementation of the attention-based convolutional neural network with multi-modal temporal information fusion for MI-EEG decoding.

Network Architecture

The proposed network is designed to extract multi-modal temporal information and to learn more comprehensive global dependencies. It is composed of the following four parts (a minimal code sketch follows the list):

  1. Feature extraction module: The multi-modal temporal information is extracted from two distinct perspectives: average and variance.
  2. Self-attention module: The shared self-attention module is designed to capture global dependencies along these two feature dimensions.
  3. Convolutional encoder: The convolutional encoder is then designed to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features.
  4. Classification: A fully connected (FC) layer finally classifies features from the convolutional encoder into given classes.
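The PyTorch sketch below illustrates how these four stages could fit together. It is not the authors' implementation: the layer shapes, pooling window, head count, and the `VarPool1d` helper are assumptions chosen for readability, and `TransNetSketch` is a hypothetical name.

```python
import torch
import torch.nn as nn


class VarPool1d(nn.Module):
    """Variance pooling over non-overlapping temporal windows (assumed form)."""

    def __init__(self, kernel_size):
        super().__init__()
        self.kernel_size = kernel_size

    def forward(self, x):
        # x: (batch, channels, time) -> (batch, channels, n_windows)
        b, c, t = x.shape
        x = x[:, :, : t - t % self.kernel_size]
        return x.reshape(b, c, -1, self.kernel_size).var(dim=-1)


class TransNetSketch(nn.Module):
    """Illustrative four-stage model; all sizes are assumptions, not the paper's."""

    def __init__(self, n_channels=22, n_times=1000, n_classes=4,
                 n_filters=40, pool=75, n_heads=4):
        super().__init__()
        # 1) Feature extraction: temporal + spatial convolution, then
        #    parallel average- and variance-pooling branches over time.
        self.conv = nn.Sequential(
            nn.Conv2d(1, n_filters, (1, 25), padding=(0, 12)),
            nn.Conv2d(n_filters, n_filters, (n_channels, 1)),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
        )
        self.avg_pool = nn.AvgPool1d(pool)
        self.var_pool = VarPool1d(pool)
        # 2) One self-attention module shared by both pooled streams.
        self.attn = nn.MultiheadAttention(n_filters, n_heads)
        # 3) Convolutional encoder fusing the two streams.
        self.encoder = nn.Sequential(
            nn.Conv1d(2 * n_filters, n_filters, 3, padding=1),
            nn.BatchNorm1d(n_filters),
            nn.ELU(),
        )
        # 4) Fully connected classifier.
        self.fc = nn.Linear(n_filters * (n_times // pool), n_classes)

    def forward(self, x):
        # x: (batch, 1, n_channels, n_times)
        feats = self.conv(x).squeeze(2)              # (batch, filters, time)
        avg = self.avg_pool(feats).permute(2, 0, 1)  # (seq, batch, filters)
        var = self.var_pool(feats).permute(2, 0, 1)
        avg, _ = self.attn(avg, avg, avg)  # the same attention weights model
        var, _ = self.attn(var, var, var)  # global dependencies in both streams
        fused = torch.cat([avg, var], dim=2).permute(1, 2, 0)
        return self.fc(self.encoder(fused).flatten(1))


if __name__ == "__main__":
    # Smoke test with a dummy batch shaped like BCI Competition IV-2a
    # trials (22 channels, 1000 samples); the shapes are assumptions.
    model = TransNetSketch()
    print(model(torch.randn(8, 1, 22, 1000)).shape)  # torch.Size([8, 4])
```

Running the file prints `torch.Size([8, 4])`: one logit per class for each dummy trial. The key design point the sketch tries to capture is that a single `nn.MultiheadAttention` instance is applied to both pooled streams, so the two temporal views share attention parameters before the convolutional encoder fuses them.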

Requirements

  • PyTorch 1.7
  • Python 3.7
  • mne 0.23
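Since mne is among the requirements, the snippet below shows one plausible way to epoch a motor-imagery recording into model-ready tensors. The GDF file name, event mapping, and 4 s window are placeholders, not the repository's actual preprocessing pipeline.

```python
# Hedged example: epoching an MI recording with mne. File name, events,
# and window length are placeholders, not this repo's actual pipeline.
import mne
import torch

raw = mne.io.read_raw_gdf("A01T.gdf", preload=True)  # placeholder file
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=4.0 - 1.0 / raw.info["sfreq"],
                    baseline=None, preload=True)
x = torch.from_numpy(epochs.get_data()).float().unsqueeze(1)
print(x.shape)  # (n_trials, 1, n_channels, n_times)
```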

Datasets

Results

The classification results for our proposed network and other competing architectures are reported in the result figures (Results 1 and 2) in the repository.

Citation

If you find this code useful, please cite our paper:

@article{ma2024attention,
  title={Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding},
  author={Ma, Xinzhi and Chen, Weihai and Pei, Zhongcai and Zhang, Yue and Chen, Jianer},
  journal={Computers in Biology and Medicine},
  pages={108504},
  year={2024},
  publisher={Elsevier}
}