This is a part of my final-year project, in which we analyse EEG (electroencephalogram) signals, which are recordings of the electrical activity of the brain from the scalp.
For this experiment we used the EMOTIV Epoc+ device to collect EEG signals and obtain Brain Activation Map (BAM) videos for individual subjects.
- Python 2.7.6
- TensorFlow
- scikit-learn: for performance metrics
- EMOTIV Epoc Brain Activity Map
- OpenCV2: for image and video processing
- Pre-trained VGG16
- The EEG signals and Brain Activation Maps were collected using the EMOTIV Epoc+ device mentioned above.
- We collected samples from over 50 subjects; each subject was shown a list of 25 words and asked to indicate whether or not they knew the meaning of each word. The recording of one subject was discarded because it contained too much noise.
- The distribution of the training and test sets is as follows (a sketch of the subject-wise split follows this list).
- Training Set: 1125 instances from 45 subjects who each saw 25 words
- Test Set: 125 instances from the remaining 5 subjects
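The split is by subject, so no subject appears in both sets. Below is a minimal sketch of how such a subject-wise split could be produced; the directory layout, file naming, and random seed are illustrative assumptions, not the repository's actual structure.

```python
import os
import random

# Hypothetical layout: one directory of per-word images for each subject.
DATA_DIR = "data/bam_frames"   # assumed path, not the repo's actual layout
TEST_SUBJECTS = 5              # 5 held-out subjects -> 5 * 25 = 125 test instances
WORDS_PER_SUBJECT = 25

subjects = sorted(os.listdir(DATA_DIR))
random.seed(42)                # arbitrary seed for a reproducible split
random.shuffle(subjects)

test_subjects = subjects[:TEST_SUBJECTS]
train_subjects = subjects[TEST_SUBJECTS:]

print("Train: %d subjects, %d instances" % (len(train_subjects), len(train_subjects) * WORDS_PER_SUBJECT))
print("Test:  %d subjects, %d instances" % (len(test_subjects), len(test_subjects) * WORDS_PER_SUBJECT))
```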
- In this experiment we used Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
- The model that we used is taken from the work cited in References.
- The images are cropped from the BAM videos for each individual; each video is 1:45 min long.
- The frame rate is approximately 19.09 frames per second.
- There is a 2-second transition period from one word to the next; a sketch of how per-word frames could be extracted is shown below.
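A minimal sketch of this frame-extraction step, using OpenCV's `VideoCapture`, is shown below. The video path, output layout, and the even split of the 1:45 min video across the 25 words are illustrative assumptions rather than the repository's exact pipeline.

```python
import os
import cv2

# Minimal sketch (not the repo's exact pipeline): crop per-word frames out of a
# BAM video, skipping the 2-second transition between consecutive words.
VIDEO_PATH = "bam_videos/subject_01.avi"   # hypothetical path
OUT_DIR = "frames"                         # hypothetical output directory
WORDS = 25
FPS = 19.09                                # approximate frame rate of the BAM videos
TRANSITION_SEC = 2.0
SEGMENT_SEC = 105.0 / WORDS                # assume the 1:45 min video splits evenly across 25 words

if not os.path.isdir(OUT_DIR):
    os.makedirs(OUT_DIR)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = frame_idx / FPS                    # timestamp of this frame in seconds
    word_idx = int(t // SEGMENT_SEC)
    # Keep only frames shown after the transition period of each word segment.
    if word_idx < WORDS and (t - word_idx * SEGMENT_SEC) >= TRANSITION_SEC:
        out_name = "word%02d_frame%05d.png" % (word_idx, frame_idx)
        cv2.imwrite(os.path.join(OUT_DIR, out_name), frame)
    frame_idx += 1
cap.release()
```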
- We make use of two models for extracting features with CNNs. The first is `EEG_Model.py`, in which we train the CNNs on our own images; the other is `EEG_VGG_Model.py`, where we extract features from the `pool5` layer of the popular VGGNet trained on ImageNet (see the feature-extraction sketch below).
- The first model is trained for 100 epochs and the second model for 50 epochs.
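For the second model, the following is a minimal sketch of `pool5` (block5_pool) feature extraction with a pre-trained VGG16, written against the `tf.keras` applications API; it may differ from the code in `EEG_VGG_Model.py`, and the frame path is a placeholder.

```python
import numpy as np
import cv2
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

# Pre-trained VGG16 without the classifier head: its output is block5_pool ("pool5").
vgg = VGG16(weights="imagenet", include_top=False)

# Hypothetical frame produced by the extraction step above.
img = cv2.imread("frames/word00_frame00040.png")
img = cv2.resize(img, (224, 224))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype("float32")
batch = preprocess_input(np.expand_dims(img, axis=0))

features = vgg.predict(batch)      # shape: (1, 7, 7, 512)
features = features.reshape(1, -1) # flatten before feeding a downstream classifier/RNN
print(features.shape)
```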
First Model (`EEG_Model.py`):
- Accuracy: 0.758
- Precision: 0.75
- F1 Score: 0.875
Second Model (`EEG_VGG_Model.py`):
- Accuracy: 0.28
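The metrics above were computed with scikit-learn; a minimal sketch of the computation is shown below, with placeholder labels and predictions standing in for the actual test-set outputs.

```python
from sklearn.metrics import accuracy_score, precision_score, f1_score

# Placeholder labels and predictions (1 = subject knows the word, 0 = does not).
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]

print("Accuracy: %.3f" % accuracy_score(y_true, y_pred))
print("Precision: %.3f" % precision_score(y_true, y_pred))
print("F1 Score: %.3f" % f1_score(y_true, y_pred))
```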
- One major concern here is that we do not have enough data to train RNNs on a skewed dataset such as ours, which is self-evident in the second model's results.
- Train the model using the images constructed from EEG signals as specified in References.
- We used the BAM videos from the theta frequency band only; we should incorporate the beta and alpha frequency bands as well.
- Develop context-based word familiarity rather than the unigram approach we have used; an n-gram word approach could help us understand how an individual perceives the meaning of a word.
MIT