nma_motor_imagery

NMA project work on decoding motor behaviour from ECoG neural activity


Questions

Q1. How precisely/accurately can our model classify motor behavior (real vs. imagined, hand vs. tongue movement) from ECoG neural activity?

Q2. Can we use the model to show which electrodes/regions are more or less responsible for imagined vs. real movement and for hand vs. tongue movement? Candidate tools: linear combinations of sites or non-linear selection (lasso), mutual information (MI), conditional mutual information (CMI). Do these models explain Figure 2 (spatially)? If we train an autoregressive model, we can also do this for the temporal range before/during stimulation.

Relevant Literature (Maybe)

  1. Classifying ECoG/EEG-Based Motor Imagery Tasks ref
    • Power spectral density (PSD) is used as the feature set.
    • To handle redundancy: Fisher discriminant analysis (FDA) and common spatial patterns (CSP). <No background on these>
    • A simple KNN model is used as the classifier.
  2. <Found via the Connected Papers tool> Performance of common spatial pattern under a smaller set of EEG electrodes in brain-computer interface on chronic stroke patients: A multi-session dataset study ref
    • Uses a CSP-rank mechanism to select a subset of electrodes. (Question: can this be done in our short time frame?)
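
CSP (from both references above) has a compact closed form: solve a generalized eigendecomposition of the two class covariance matrices and keep the filters with extreme eigenvalues. A minimal sketch on toy data — the function name, shapes, and the crude "rank channels by filter weight" selection at the end are assumptions for illustration, not taken from either paper:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Common spatial patterns for two classes.

    trials_* : arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters of shape (n_channels, n_filters).
    """
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Generalized eigenproblem cov_a w = lambda (cov_a + cov_b) w:
    # eigenvalues near 1 favor class-A variance, near 0 favor class B
    vals, vecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(vals)
    half = n_filters // 2
    return vecs[:, np.concatenate([order[:half], order[-half:]])]

# Toy data: channel 0 is stronger in class A, channel 1 in class B
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 200)); a[:, 0] *= 3
b = rng.standard_normal((20, 8, 200)); b[:, 1] *= 3
W = csp_filters(a, b)

# A crude "CSP-rank"-style score: rank channels by their weight in the two
# most discriminative filters (first and last columns) to pick a subset
channel_rank = np.argsort(-np.abs(W[:, [0, -1]]).sum(axis=1))
```

Projecting trials through the extreme filters gives features whose variance differs maximally between the two classes.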

Ingredients

  • Input: $X_t \in \mathbb{R}^{46 \times 3000}$ — 46 channels sampled at 1000 Hz for 3000 time points (3 s). ~60 trials per subject.
  • Filter: $f$
    • PSD
    • Moving average
  • Model: $\theta$
  • Feedback / preprocessing: t-scores from a GLM (why?), PCA/ICA (which model?)
  • Output: $y_t$, label in the set {real, imagined} × {hand, tongue}.
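
The PSD filter step above could be sketched with `scipy.signal.welch` on a stand-in trial of the shapes listed (the 70–150 Hz band choice here is an assumption, not from the notes):

```python
import numpy as np
from scipy.signal import welch

fs = 1000                                # sampling rate from the notes
rng = np.random.default_rng(0)
X_t = rng.standard_normal((46, 3000))    # stand-in for one 46-channel, 3 s trial

# Welch PSD per channel; nperseg=500 gives 2 Hz frequency resolution
freqs, psd = welch(X_t, fs=fs, nperseg=500, axis=-1)

# Average power in a band of interest (e.g. high gamma) -> one feature/channel
band = (freqs >= 70) & (freqs <= 150)
features = psd[:, band].mean(axis=1)     # shape (46,)
```

A moving-average filter would instead smooth `X_t` along the time axis before feature extraction.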

Formulate mathematical hypothesis

  • Decoder model: $\theta(f(X_t)) = \hat{y}_t$
  • Loss function: binary cross-entropy: $-\frac{1}{N} \sum_{t=1}^{N} \left[ y_t \log(\hat{y}_t) + (1 - y_t) \log(1 - \hat{y}_t) \right]$
  • Null hypothesis $H_0$: accuracy $\leq$ 80%
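
The loss can be written directly in NumPy; a sketch (`binary_cross_entropy` is a hypothetical helper, and the clipping constant is an implementation detail to avoid log(0)):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over N predictions."""
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1 - y_true) * np.log(1 - y_pred))

y = np.array([1, 0, 1, 1])
print(binary_cross_entropy(y, np.array([0.9, 0.1, 0.8, 0.7])))  # ≈ 0.198
print(binary_cross_entropy(y, np.array([0.1, 0.9, 0.2, 0.3])))  # much larger
```

Confident correct predictions drive the loss toward 0; confident wrong ones are penalized heavily.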

Roadmap

  1. Create Git
  2. Explore/Understand/Convert data

Towards answering some questions

  1. GLM analysis as done in fMRI? (primer on GLM, GLM step-by-step, full example with decoding)

    • Model per electrode: $y_i = X b_i + \epsilon_i$; contrast statistic: $c^\top b_i$
    • $y_i$ is the signal recorded from electrode $i$, of shape (t_points,)
    • $X$ is the design matrix of shape (t_points, event_ids); ideally it should be the signal from the data gloves
    • $c$ is the contrast vector of shape (event_ids,)
    • $b_i$ is the vector of regression weights for electrode $i$, of shape (event_ids,); the contrast $c^\top b_i$ is a scalar
    • A hand - tongue contrast means c is [0, -1, 1] (??)
    • Optimise the betas in $B$, i.e. the weights corresponding to each electrode, showing which electrodes were most active during hand events relative to tongue events
    • Get the t-score corresponding to each contrast, showing whether and how significant the difference was
    • Convert these t-scores to z-scores
  2. Decoding:

    • use z-scores from the encoding step to classify between events
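
Steps 1 and 2 together could be sketched on simulated data as below. All shapes, effect sizes, and the KNN decoder choice (borrowed from the first reference) are assumptions; the t-to-z conversion matches upper-tail probabilities:

```python
import numpy as np
from scipy import stats
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# --- 1. Encoding: one GLM per electrode, then contrast t- and z-scores ---
t_points, n_events, n_electrodes = 3000, 3, 46
X = rng.standard_normal((t_points, n_events))       # stand-in design matrix
B_true = np.zeros((n_electrodes, n_events))
B_true[:5, 1:] = [-0.1, 0.1]                        # 5 "hand-selective" electrodes
Y = X @ B_true.T + rng.standard_normal((t_points, n_electrodes))

c = np.array([0.0, -1.0, 1.0])                      # hand - tongue contrast
dof = t_points - n_events
var_c = c @ np.linalg.inv(X.T @ X) @ c              # contrast variance factor

z_scores = np.empty(n_electrodes)
for i in range(n_electrodes):
    b_i = np.linalg.lstsq(X, Y[:, i], rcond=None)[0]    # OLS betas
    sigma2 = np.sum((Y[:, i] - X @ b_i) ** 2) / dof     # residual variance
    t_i = (c @ b_i) / np.sqrt(sigma2 * var_c)           # contrast t-score
    z_scores[i] = stats.norm.isf(stats.t.sf(t_i, dof))  # t -> z via tail prob

# --- 2. Decoding: classify trials from per-electrode features ---
n_trials = 60                                       # ~60 trials per subject
feats = rng.standard_normal((n_trials, n_electrodes))
labels = rng.integers(0, 2, n_trials)               # 0 = tongue, 1 = hand
feats[labels == 1, :10] += 3.0                      # informative electrodes
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                      feats, labels, cv=5).mean()
```

On real data, the encoding z-scores would guide which electrodes feed the decoder, rather than being drawn from the same simulation.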

Maybe goals

  1. Why model? There are two possible explanations (recruitment vs. firing-rate increase).
    • Is there a model that can address this hypothesis? It would address the recurrent relationship.
    • Do we have the spatial or temporal resolution to address this?
    • If we could show this: a recurrent network

How about predicting when? Autoregressive models, the premotor cortex, and planning. Predicting why changes happen? Is it learning? Priming vs. habituation. Maybe.
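
A toy version of the "predicting when" idea: fit an autoregressive model by least squares on lagged copies of a signal. The AR(2) coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-channel signal with known AR(2) structure
n = 3000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

# Fit AR(p) by least squares on lagged copies of the signal
p = 2
lags = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
coefs = np.linalg.lstsq(lags, x[p:], rcond=None)[0]
# coefs should recover roughly [0.5, -0.3]

# One-step-ahead prediction for the next sample
next_pred = coefs @ x[-1:-p - 1:-1]
```

Applied per electrode in the pre-stimulus window, such a model could test whether premotor activity predicts upcoming movement onset.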