Barlow_Twins_EEG

A Novel Negative-sample-free Contrastive Self-Supervised Learning for EEG-Based Motor Imagery Classification

In-Nea Wang, Cheol-Hui Lee, Hakseung Kim, and Dong-Joo Kim

Introduction 🔥

Motor imagery-based brain–computer interface (MI-BCI) systems convert user intentions into computer commands, aiding the communication and rehabilitation of individuals with motor disabilities. Traditional MI classification relies on supervised learning; however, it struggles to acquire large volumes of reliably labeled data and to generalize beyond the specific experimental paradigm used for training. To address these issues, this study proposes a contrastive self-supervised learning (SSL) method that does not require negative samples. A MultiResolutionCNN backbone, designed to capture temporal and spatial electroencephalogram (EEG) features, is introduced. Using a Barlow Twins loss for contrastive SSL, features are effectively extracted from EEG signals without labels.

MI classification was performed on two datasets that were not used for pretraining: an in-domain dataset with two classes and an out-domain dataset with four classes. The proposed framework outperformed conventional supervised learning, achieving accuracies of 82.47% and 43.19% on the in-domain and out-domain datasets, respectively. Varying key training hyperparameters produced only small performance changes: the maximum differences across lambda, learning rate, and batch size settings were 0.93%, 3.28%, and 3.38%, respectively, demonstrating robust generalization. This study facilitates the development and application of more practical and robust MI-BCI systems.
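The Barlow Twins objective mentioned above pushes the cross-correlation matrix between embeddings of two augmented views toward the identity: the diagonal term enforces invariance to augmentation, and the lambda-weighted off-diagonal term reduces redundancy between feature dimensions. A minimal NumPy sketch of this loss (not the repository's actual implementation; function and argument names are illustrative):

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins loss for two batches of embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views of the same inputs.
    lam: weight of the redundancy-reduction (off-diagonal) term.
    """
    # Standardize each embedding dimension over the batch
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + 1e-8)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + 1e-8)

    n, d = z1.shape
    c = z1.T @ z2 / n  # (dim, dim) cross-correlation matrix

    # Invariance term: diagonal should be 1 (views agree per dimension)
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()
    # Redundancy-reduction term: off-diagonal should be 0 (dimensions decorrelated)
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

Because no negative samples appear in the objective, the loss avoids the large batches that InfoNCE-style contrastive methods typically need, which is the "negative-sample-free" property highlighted in the title.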

Model Architecture


(A) Illustration of the proposed contrastive SSL framework using Barlow Twins. (B) Architecture of the proposed model. The purple and navy regions represent the backbone and classifier networks of the proposed model, respectively.
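The MultiResolutionCNN backbone is described as capturing temporal EEG features at several resolutions. As a rough, hypothetical illustration of the multi-resolution idea only (the real backbone uses learned convolutional kernels; here a moving-average kernel stands in for a learned filter, and all names are assumptions):

```python
import numpy as np

def multires_features(eeg, kernel_sizes=(7, 15, 31)):
    """Filter each EEG channel at several temporal scales and stack the results.

    eeg: (channels, time) array of raw signal.
    Returns: (channels * len(kernel_sizes), time) feature array.
    """
    feats = []
    for k in kernel_sizes:
        kern = np.ones(k) / k  # moving-average stand-in for a learned kernel
        smoothed = np.stack([np.convolve(ch, kern, mode="same") for ch in eeg])
        feats.append(smoothed)
    return np.concatenate(feats, axis=0)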

Main Result 🥇

Comparison with Other Supervised Learning Approaches

| Model | OpenBMI ACC | OpenBMI MF1 | OpenBMI κ | SingleArmMI ACC | SingleArmMI MF1 | SingleArmMI κ |
|---|---|---|---|---|---|---|
| FBCSP | 61.03±4.46 | 60.5±14.55 | 0.22±0.09 | 24.86±1.76 | 21.84±3.48 | -0.00±0.02 |
| ShallowConvNet | 78.95±5.19 | 78.93±5.22 | 0.58±0.10 | 37.36±2.74 | 36.23±3.22 | 0.16±0.04 |
| DeepConvNet | 77.39±5.08 | 77.37±5.08 | 0.55±0.10 | 36.39±1.64 | 35.90±1.56 | 0.15±0.02 |
| EEGNet | 79.45±4.68 | 79.41±4.69 | 0.59±0.09 | 33.75±3.62 | 33.76±3.67 | 0.12±0.05 |
| Proposed Model (linear probing) | 64.91±5.12 | 64.82±5.18 | 0.30±0.10 | 34.58±4.53 | 34.38±4.61 | 0.13±0.06 |
| Proposed Model (fine-tuning) | 82.47±4.04 | 82.45±4.05 | 0.65±0.08 | 43.19±3.92 | 42.79±4.13 | 0.24±0.05 |
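The two evaluation protocols above differ in what is trained downstream: linear probing freezes the pretrained backbone and fits only a linear classifier on its features, whereas fine-tuning updates the whole network. A minimal NumPy sketch of the linear-probing step (illustrative only, assuming features have already been extracted by a frozen backbone; names are hypothetical):

```python
import numpy as np

def linear_probe(features, labels, n_classes):
    """Fit a linear classifier on frozen backbone features via least squares.

    features: (n_samples, feat_dim) array from the frozen, pretrained backbone.
    labels: (n_samples,) integer class labels.
    Returns: (feat_dim, n_classes) weight matrix.
    """
    y = np.eye(n_classes)[labels]  # one-hot targets
    w, *_ = np.linalg.lstsq(features, y, rcond=None)
    return w

def predict(features, w):
    """Class prediction = argmax of the linear scores."""
    return (features @ w).argmax(axis=1)
```

The gap between the two rows in the table (e.g. 64.91% probing vs 82.47% fine-tuning on OpenBMI) is expected: probing measures how linearly separable the SSL features already are, while fine-tuning also adapts the backbone to the downstream task.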

UMAP visualization & Confusion Matrix


(A) UMAP visualizations demonstrating the separability of representation vectors. (B) Confusion matrices for MI classification with linear probing and fine-tuning.

License and Citation 📰

The software is licensed under the Apache License 2.0. Please cite the following paper if you use this code: