Notable differences in Guilherme's approach
He uses scipy.linalg.eigh in whiten(), while we use scipy.linalg.eig. The difference: eigh assumes a symmetric (Hermitian) input and returns real eigenvalues in ascending order, whereas eig handles general matrices and returns complex eigenvalues in no guaranteed order. Since a covariance matrix is symmetric, eigh seems the better fit.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html
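
For reference, a quick sketch (my own toy example, not from either codebase) of the practical difference on a covariance matrix:

```python
# eigh assumes a symmetric/Hermitian matrix and returns real eigenvalues in
# ascending order; eig handles general matrices and returns complex eigenvalues
# in arbitrary order. On a covariance matrix they give the same spectrum.
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1000))
cov = np.cov(x)                      # symmetric by construction

d_h, v_h = linalg.eigh(cov)          # real eigenvalues, ascending
d_g, v_g = linalg.eig(cov)           # complex dtype, arbitrary order

print(np.allclose(np.sort(d_g.real), d_h))   # True: same spectrum
```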
He does use the regularization factor stated in Negro et al. (2016) in his whiten function:
# Regularization
# d holds the eigenvalues of the extended-EMG covariance matrix; eigh returns
# them in ascending order, so this slice is the smallest half
reg_fact = d[:round(len(d)/2)].mean()
Not entirely sure but I believe this is how they get around the division by zero problem:
"The eigenvalue decomposition has been per- formed in this study with a regularization factor fixed for all recordings to the average of the smallest half of the eigen- values of the correlation matrix of the extended EMG signals. The regularization procedure was applied to reduce the numerical instability of the solutions of the inverse problem."
EDIT: He calculates this regularization factor but then never actually applies it, which looks like an error. Where he does attempt to apply it, he replaces the smallest half of the eigenvalues with the regularization factor, whereas I think we should be adding it to them.
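
If we go the "add it" route, this is roughly what I have in mind (my own sketch and my reading of Negro et al. 2016, not his code; the whiten signature here is hypothetical):

```python
import numpy as np
from scipy import linalg

def whiten(x):
    """Whiten an observations-by-samples matrix x (assumed already centred)."""
    cov = np.cov(x)
    d, v = linalg.eigh(cov)              # eigenvalues in ascending order
    reg_fact = d[:len(d) // 2].mean()    # mean of the smallest half
    # Regularize by adding the factor to every eigenvalue, which keeps the
    # denominator away from zero instead of overwriting the small eigenvalues.
    whitening_matrix = v @ np.diag(1.0 / np.sqrt(d + reg_fact)) @ v.T
    return whitening_matrix @ x
```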
His subtract_mean function doesn't return anything because it modifies the input x in place. He centres AFTER extending, which means the zeros introduced by the extension padding take on non-zero values.
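
A tiny illustration of why the order matters (my own toy example, not his code):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])                      # one channel
ext = np.vstack([x, np.concatenate([[0.0], x[:-1]])])   # delayed copy, zero-padded
centred = ext - ext.mean(axis=1, keepdims=True)
print(centred[1, 0])   # -1.5: the padded zero is no longer zero after centring
```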
His extend function doesn't return an M x (R+1) x K matrix but an M(R+1) x K matrix: the delayed copies of all channels are stacked along a single dimension, whereas we keep the channels separate. I believe his implementation makes more sense, since whitening compares all rows against each other anyway (we don't treat rows differently depending on which channel they belong to). Also, np.cov doesn't accept input with more than two dimensions, so we need to change our implementation either way.
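
Something like this is what I'd expect the flattened extend to look like (rough sketch, my own code, not his exact implementation):

```python
import numpy as np

def extend(x, r):
    """x: (M, K) EMG array, r: number of extra delayed copies per channel."""
    m, k = x.shape
    extended = np.zeros((m * (r + 1), k))
    for ch in range(m):
        for delay in range(r + 1):
            # delayed copy of channel ch, zero-padded at the start
            extended[ch * (r + 1) + delay, delay:] = x[ch, :k - delay]
    return extended
```

np.cov(extend(x, r)) then gives the M(R+1) x M(R+1) correlation matrix directly, with no reshaping needed before whitening.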