Logo

@renhong-zhang/eeg-emotion-recognition-with-vit

Use a Vision Transformer for emotion recognition with the DEAP dataset and EEG signals.
This project is built with the TensorFlow and PyTorch frameworks to implement EEG-based emotion recognition. The wavelet transform methods DWT, CWT, and DTCWT are used to preprocess the raw EEG signals before they are fed into the ViT model. With these methods, the emotion recognition test accuracy ranges from 80% to 90%.
View Demo · Report bug · Request feature

Prerequisites: Python 3.8
Languages & Tools: Python, TensorFlow, PyTorch
License: MIT
State: Maintained

-----------------------------------------------------

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Author
  6. How can I support you?
  7. Acknowledgments
  8. License

-----------------------------------------------------

About The Project

TensorFlow Version

This code replicates the methodology described in the paper "Introducing Attention Mechanism for EEG Signals: Emotion Recognition with Vision Transformers" and evaluates the proposed approach empirically. It is based on TensorFlow and improves and corrects numerous issues in the paper's code. The authors claim 99.4% (Valence) and 99.1% (Arousal) accuracy, but those numbers do not hold up in practice: the actual test accuracy is at most 81%, and the CWT variant never reaches the reported 97% (Valence) and 95.75% (Arousal); it stays just over 60%. I emailed the author, Arjun, three months ago, and all I got back was a promise that he would update his program. For this reason, I am skeptical about the paper's results. After reading the code carefully, I also found several questionable choices in their approach.

I tested various wavelet transform methods (DTCWT, DWT, and CWT), and my heavily modified model then performs emotion recognition on the preprocessed EEG data. All of these tests achieved a test accuracy of 80% or higher, with the best reaching 85%.
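
As a rough, hypothetical illustration of the kind of wavelet preprocessing involved (not the exact settings used in this repository), the sketch below turns a single EEG channel into a CWT scalogram with PyWavelets; the sampling rate, scales, and mother wavelet are placeholder assumptions.

  import numpy as np
  import pywt

  # Hypothetical example: one EEG channel at 128 Hz (DEAP's preprocessed sampling rate).
  fs = 128
  signal = np.random.randn(60 * fs)  # stand-in for a 60-second trial

  # Continuous Wavelet Transform; the scales and the Morlet wavelet are illustrative choices.
  scales = np.arange(1, 65)
  coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1.0 / fs)

  # |coeffs| is a (scales x time) scalogram that can be resized and fed to a ViT as an image.
  scalogram = np.abs(coeffs)
  print(scalogram.shape)  # (64, 7680)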

PyTorch Version

This part of the project is based on "lucidrains/vit-pytorch" and re-implements the same model as the TensorFlow version. As a result, the code is clearer, more concise, and easier to read.
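
For reference, instantiating a ViT from lucidrains/vit-pytorch looks roughly like the sketch below; the image size, patch size, and the other hyperparameters are illustrative placeholders, not the configuration used in this repository.

  import torch
  from vit_pytorch import ViT

  # Illustrative hyperparameters only.
  model = ViT(
      image_size=64,    # side length of the preprocessed EEG "image"
      patch_size=8,
      num_classes=2,    # e.g. high vs. low valence
      dim=128,
      depth=6,
      heads=8,
      mlp_dim=256,
      channels=3,
      dropout=0.1,
      emb_dropout=0.1,
  )

  imgs = torch.randn(4, 3, 64, 64)  # batch of 4 dummy inputs
  logits = model(imgs)              # shape: (4, 2)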

Data Processing

This directory contains the CWT, DTCWT, and DWT pre-processing programs for the DEAP Matlab data. They extract features such as PSD, DE, MAE, and DFA in the δ, γ, β, α, and θ bands, with all functions included in Processing_mat_xwt.py and Processing_xwt.py. Also included are three additional preprocessing scripts: processing_CWT.m (the only file adapted from "Introducing Attention Mechanism for EEG Signals: Emotion Recognition with Vision Transformers"), plus processing_DTCWT.py and processing_DWT.py, which I wrote myself.
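
As a minimal sketch of this kind of band-wise feature extraction (assuming a Gaussian signal model, and not the exact implementation in Processing_xwt.py), the snippet below decomposes one channel with a multi-level DWT and computes a differential entropy (DE) value per sub-band.

  import numpy as np
  import pywt

  # Hypothetical single-channel EEG trial at 128 Hz.
  fs = 128
  signal = np.random.randn(60 * fs)

  # 4-level DWT with db4: yields the approximation cA4 plus details cD4..cD1,
  # which roughly cover the low-to-high frequency bands (δ up to γ).
  coeffs = pywt.wavedec(signal, 'db4', level=4)

  def differential_entropy(x):
      # DE of a zero-mean Gaussian signal: 0.5 * log(2 * pi * e * variance)
      return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

  features = [differential_entropy(c) for c in coeffs]
  print(features)  # one DE value per sub-band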

Built With

Major Frameworks/Libraries: TensorFlow, PyTorch

-----------------------------------------------------

Getting Started

Prerequisites

Python 3.8 or above.

Installation

  1. For the TensorFlow version: install tensorflow-gpu.
  2. For the PyTorch version: install PyTorch with CUDA/cuDNN support.
  3. Install all the other relevant libraries.

-----------------------------------------------------

Usage

  1. Run Processing_mat_xwt.py.
  2. Run Processing_xwt.py.
  3. Run Runner.py.

-----------------------------------------------------

Roadmap

  • The next stage is to research translating EEG signals into human-understandable text, image, or video.

-----------------------------------------------------

Author

Renhong Zhang
Github: @renhong-zhang

-----------------------------------------------------

How can I support you?

There are lots of ways to support me! I would be so happy if you give this repository a ⭐️ and tell your friends about this little corner of the Internet.

-----------------------------------------------------

Acknowledgments

-----------------------------------------------------

License

MIT

Copyright © 2022-present, Renhong Zhang