Q. How precisely/accurately can our model classify motor behavior among real, imagery, hand, and tongue movements from ECoG neural activity?
(2. Can we use the model to show which electrodes/regions are more or less responsible for imagery vs. real movement and for hand vs. tongue? Candidate tools: linear or sparse (lasso) combinations of sites, MI, CMI. Do these models explain Figure 2 (spatially)? If we train an autoregressive model, we can do this for the temporal range before/during the stimulus.)
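A minimal sketch of one way to attack question 2, assuming per-channel band-power features are already computed; the L1-regularized logistic regression and mutual-information ranking are illustrative candidates, not settled design choices:

```python
# Sketch: rank electrodes by importance for a binary contrast (e.g. hand vs. tongue),
# assuming `features` holds one band-power value per channel per trial (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import mutual_info_classif
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels = 60, 46                              # ~60 trials per subject, 46 ECoG channels
features = rng.standard_normal((n_trials, n_channels))     # placeholder band-power features
labels = rng.integers(0, 2, n_trials)                      # 0 = tongue, 1 = hand (placeholder)

X = StandardScaler().fit_transform(features)

# Sparse (lasso-style) weights: non-zero coefficients mark informative channels.
lasso_clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, labels)
lasso_rank = np.argsort(-np.abs(lasso_clf.coef_[0]))

# Mutual information between each channel's feature and the label.
mi = mutual_info_classif(X, labels, random_state=0)
mi_rank = np.argsort(-mi)

print("Top channels (lasso):", lasso_rank[:5])
print("Top channels (MI):   ", mi_rank[:5])
```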
- Classifying ECoG/EEG-Based Motor Imagery Tasks ref
- Power spectral density (PSD) is used as the feature set.
- To handle redundancy: Fisher discriminant analysis (FDA) and common spatial patterns (CSP) <no background on these>
- A simple KNN model is used as the classifier.
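A minimal sketch of that pipeline (Welch PSD features followed by a KNN classifier) on placeholder data; the band edges and KNN settings are assumptions, not the paper's values:

```python
# Sketch: Welch PSD features per channel -> KNN classifier, on placeholder ECoG epochs.
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
fs = 1000                                        # sampling rate (Hz)
n_trials, n_channels, n_times = 60, 46, 3000
epochs = rng.standard_normal((n_trials, n_channels, n_times))   # placeholder trials
labels = rng.integers(0, 2, n_trials)                           # placeholder labels

# Mean PSD power in a broad high-gamma band (70-100 Hz, an assumed choice) per channel.
freqs, psd = welch(epochs, fs=fs, nperseg=512, axis=-1)          # psd: (trials, channels, freqs)
band = (freqs >= 70) & (freqs <= 100)
features = psd[:, :, band].mean(axis=-1)                         # (trials, channels)

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, features, labels, cv=5)
print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```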
- <Using the Connected Papers tool> Performance of common spatial pattern under a smaller set of EEG electrodes in brain-computer interface on chronic stroke patients: A multi-session dataset study ref
- Uses a CSP-rank mechanism to select a subset of electrodes. (Question: can this be done in the short time frame?)
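A minimal from-scratch sketch of CSP-style channel ranking (generalized eigendecomposition of class covariances), not the paper's exact CSP-rank mechanism; the number of filters kept and the per-channel scoring rule are assumptions:

```python
# Sketch: CSP spatial filters from class covariance matrices, then a crude per-channel
# score from the filter weights. Placeholder data; not the paper's CSP-rank procedure.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 60, 46, 3000
epochs = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)

def class_cov(x):
    # Average trace-normalized spatial covariance over the trials of one class.
    covs = []
    for trial in x:
        c = np.cov(trial)                 # (channels, channels)
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

c0 = class_cov(epochs[labels == 0])
c1 = class_cov(epochs[labels == 1])

# Generalized eigenproblem: filters that maximize variance for one class relative to the other.
eigvals, filters = eigh(c0, c0 + c1)                      # columns are spatial filters
order = np.argsort(eigvals)
top_filters = filters[:, np.r_[order[:3], order[-3:]]]    # most discriminative filters (assumed 3+3)

# Crude channel score: total absolute weight across the selected filters.
channel_score = np.abs(top_filters).sum(axis=1)
print("Top channels by CSP weight:", np.argsort(-channel_score)[:5])
```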
- Input: $X_t \in \mathbf{R}^{46 \times 3000}$, 46 channels sampled at 1000 Hz for 3000 measurements (3 s); roughly 60 samples per subject.
- Filter $f$: PSD, moving average.
- Model $\theta$:
    - Feedback / preprocessing: t-score from GLM (why?), PCA/ICA (which model?)
- Output: $y_t$, a label in the set [real, imagery] and [hand, tongue].
- Decoder model: $\theta(f(X_t)) = \hat{y}_t$
- Loss function: binary cross-entropy, $\mathcal{L} = -\frac{1}{N} \sum_{t=1}^{N} \left[ y_t \log(\hat{y}_t) + (1 - y_t) \log(1 - \hat{y}_t) \right]$
- Null hypothesis $H_0$: accuracy < 80%
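A minimal sketch of this decoder setup, assuming the features $f(X_t)$ have already been extracted; logistic regression stands in for $\theta$ (it minimizes binary cross-entropy), and the 80% null hypothesis is checked with a one-sided binomial test on held-out predictions:

```python
# Sketch: logistic-regression decoder theta over filtered features f(X_t), trained with
# binary cross-entropy, then a one-sided binomial test against the 80% accuracy threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
from scipy.stats import binomtest

rng = np.random.default_rng(3)
n_trials, n_channels = 60, 46
features = rng.standard_normal((n_trials, n_channels))   # placeholder f(X_t)
labels = rng.integers(0, 2, n_trials)                     # e.g. 0 = real, 1 = imagery

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # minimizes BCE
y_hat = decoder.predict(X_test)
bce = log_loss(y_test, decoder.predict_proba(X_test))
print("test binary cross-entropy:", bce)

# H0: true accuracy <= 0.8; reject only if held-out accuracy is significantly above 80%.
n_correct = int((y_hat == y_test).sum())
result = binomtest(n_correct, n=len(y_test), p=0.8, alternative="greater")
print("accuracy = %.2f, p = %.3f" % (n_correct / len(y_test), result.pvalue))
```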
- Create Git
- Explore/Understand/Convert data
- Separate signal bands and then compute power spectra (Xander, Mandar); see the band-power sketch below. https://neurodsp-tools.github.io/neurodsp/auto_examples/plot_mne_example.html#sphx-glr-auto-examples-plot-mne-example-py
- Brain channel visualization that matches anatomy (MNE: https://mne.tools/dev/auto_tutorials/epochs/index.html, https://mne.tools/stable/auto_tutorials/clinical/30_ecog.html) (Everyone)
- EDA
- inspiration (Mandar)
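A minimal sketch of the band-separation plus power-spectrum step, using scipy rather than the neurodsp example linked above; the band edges and filter settings are assumptions:

```python
# Sketch: band-pass one ECoG channel into canonical bands, then compute in-band power.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 1000                                    # sampling rate (Hz)
rng = np.random.default_rng(4)
signal = rng.standard_normal(3000)           # placeholder single-channel trial (3 s)

bands = {"mu": (8, 13), "beta": (13, 30), "high_gamma": (70, 100)}   # assumed band edges

band_power = {}
for name, (lo, hi) in bands.items():
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)   # 4th-order Butterworth
    filtered = filtfilt(b, a, signal)                      # zero-phase filtering
    freqs, psd = welch(filtered, fs=fs, nperseg=512)
    in_band = (freqs >= lo) & (freqs <= hi)
    band_power[name] = psd[in_band].mean()

print(band_power)
```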
- GLM analysis as done in fMRI? primer on GLM, GLM step-by-step, full example with decoding
    - $y_i = c \cdot X \cdot b_i$
        - $y_i$ is the signal recorded from electrode $i$, of shape (`t_points`,)
        - $X$ is the design matrix of shape (`t_points`, `event_ids`); ideally it should be the signal from the data gloves
        - $c$ is the contrast vector of shape (`event_ids`,)
        - $b_i$ is the scalar weight for the corresponding electrode
    - Contrast `hand` - `tongue` means $c$ is [0, -1, 1] (??)
    - Optimise the betas in $B$, i.e. the weights corresponding to each electrode, showing which electrodes were most active during `hand` events relative to `tongue` events
    - Get t-scores corresponding to each beta, showing whether and how significant the difference was
    - Convert these t-scores to z-scores
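A minimal sketch of this encoding step in the standard fMRI-style form (OLS betas per regressor per electrode, a hand-minus-tongue contrast, t-scores, then z-scores), which is a slight variant of the scalar-weight formulation above; the design-matrix construction and the three-column event coding are assumptions:

```python
# Sketch: per-electrode GLM, hand-vs-tongue contrast, t-scores -> z-scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t_points, n_electrodes = 3000, 46
event_ids = 3                                        # assumed columns: rest, tongue, hand

X = rng.integers(0, 2, (t_points, event_ids)).astype(float)   # placeholder design matrix
Y = rng.standard_normal((t_points, n_electrodes))              # placeholder electrode signals
c = np.array([0.0, -1.0, 1.0])                                 # hand - tongue contrast

z_scores = np.zeros(n_electrodes)
dof = t_points - event_ids
for i in range(n_electrodes):
    beta, res_ss, _, _ = np.linalg.lstsq(X, Y[:, i], rcond=None)   # OLS betas for electrode i
    sigma2 = res_ss[0] / dof                                        # residual variance
    var_contrast = sigma2 * c @ np.linalg.inv(X.T @ X) @ c          # variance of c' beta
    t_score = (c @ beta) / np.sqrt(var_contrast)                    # t-score for the contrast
    # Convert t to z via matched tail probabilities.
    z_scores[i] = stats.norm.ppf(stats.t.cdf(t_score, dof))

print("most hand-selective electrodes:", np.argsort(-z_scores)[:5])
```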
- Decoding:
    - Use the z-scores from the encoding step to classify between events
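A minimal sketch of how those z-scores might feed the decoding step, assuming they are used to pick the most selective electrodes before classification; the electrode count and the classifier are placeholders:

```python
# Sketch: use encoding z-scores to select electrodes, then decode events from their features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_channels = 60, 46
trial_features = rng.standard_normal((n_trials, n_channels))  # placeholder per-trial features
labels = rng.integers(0, 2, n_trials)                          # hand vs. tongue (placeholder)
z_scores = rng.standard_normal(n_channels)                     # from the encoding step

top_k = 10                                                     # assumed number of electrodes kept
selected = np.argsort(-np.abs(z_scores))[:top_k]

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, trial_features[:, selected], labels, cv=5)
print("decoding accuracy on selected electrodes: %.2f" % scores.mean())
```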
- Why model? There are two possible explanations (recruitment of sites vs. firing-rate increase).
- Is there a model that can address this hypothesis? It would need to capture the recurrent relationship.
- Do we have the spatial or temporal resolution to address this?
- If we could show this: a recurrent network.
- How about predicting when? Autoregressive models, premotor cortex, and planning. Predicting why changes happen? Is it learning? Priming vs. habituation. Maybe.
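A minimal sketch of the autoregressive idea for the "predicting when" question: fit an AR model on a pre-movement window of one channel and flag when the one-step prediction error jumps. The lag order, threshold, and single-channel framing are all assumptions:

```python
# Sketch: fit an AR model on the pre-movement baseline of one channel, then flag the
# time point where one-step prediction error exceeds a threshold (a crude onset detector).
import numpy as np

rng = np.random.default_rng(7)
fs = 1000
signal = rng.standard_normal(3000)            # placeholder single-channel trial (3 s)
baseline = signal[:1000]                      # assumed pre-movement window (first 1 s)
p = 10                                        # assumed AR order

# Fit AR(p) by least squares on the baseline: s[t] ~ sum_k a_k * s[t - k].
A = np.column_stack([baseline[p - k - 1:len(baseline) - k - 1] for k in range(p)])
b = baseline[p:]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# One-step-ahead prediction error over the whole trial.
pred = np.array([signal[t - p:t][::-1] @ coeffs for t in range(p, len(signal))])
errors = np.abs(signal[p:] - pred)

threshold = errors[:1000 - p].mean() + 3 * errors[:1000 - p].std()   # baseline mean + 3 SD
onset_idx = np.argmax(errors > threshold)
print("first threshold crossing at %.3f s" % ((onset_idx + p) / fs))
```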