facebookresearch/brainmagick

About your newest paper BRAIN DECODING: TOWARD REAL-TIME RECONSTRUCTION OF VISUAL PERCEPTION

Zoe-Wan opened this issue · 2 comments

Hi! Sorry for asking questions about another paper here, but I can't wait for the code of that paper to be published.
First of all, thanks for the impressive work! It performs so well that I'm sure it will kill the game. I'm trying to reproduce the retrieval task, but it keeps overfitting on the validation set. This really confuses me, because I have checked my model structure against Table S1 and found no differences.

I suspect that I may have made some mistakes while preprocessing the MEG data. I don't know how the preprocessing differs between BRAIN DECODING: TOWARD REAL-TIME RECONSTRUCTION OF VISUAL PERCEPTION and brainmagick, for example, the clipping limit.

Besides, does the work use the default weight decay of Adam (weight_decay = 0.01)? And is any data augmentation used?

By the way, in Appendix A.2 of BRAIN DECODING (published on OpenReview), the paper says: "We focus the search on the retrieval task, i.e., by setting λ = 0 in Eq. 3, and leave the selection of an optimal λ to a model-specific sweep using a held-out set (see Section 2.3)." Shouldn't it be "setting λ = 1" here?
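For context on the λ question: whether λ = 0 or λ = 1 selects the retrieval objective depends entirely on how Eq. 3 weights the two terms. The sketch below is only an illustrative assumption (a convex combination of an MSE reconstruction term and a CLIP-style retrieval term), not the paper's actual Eq. 3; with this form, λ = 0 keeps only the retrieval term.

```python
import math

def softmax_cross_entropy(logits, label):
    """Cross-entropy of a softmax over `logits` against target index `label`."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(v - m) for v in logits))
    return log_z - logits[label]

def combined_loss(pred, target, lam):
    """Illustrative convex combination (an assumption, NOT the paper's Eq. 3):
    lam = 1 keeps only the MSE reconstruction term,
    lam = 0 keeps only the CLIP-style retrieval term."""
    n, d = len(pred), len(pred[0])
    # Mean squared error between predicted and target embeddings.
    mse = sum((p - t) ** 2
              for row_p, row_t in zip(pred, target)
              for p, t in zip(row_p, row_t)) / (n * d)
    # Retrieval term: each prediction should match its own target
    # among all targets in the batch (dot-product similarity).
    clip = 0.0
    for i, p in enumerate(pred):
        logits = [sum(a * b for a, b in zip(p, t)) for t in target]
        clip += softmax_cross_entropy(logits, i)
    clip /= n
    return lam * mse + (1 - lam) * clip

pred = [[1.0, 0.0], [0.0, 1.0]]
target = [[1.0, 0.0], [0.0, 1.0]]
print(combined_loss(pred, target, 1.0))  # pure MSE: 0.0 for a perfect match
print(combined_loss(pred, target, 0.0))  # pure retrieval term
```

Under this assumed form the quoted sentence would be consistent; if Eq. 3 instead weights the retrieval term by λ, the correction to λ = 1 would be right, which is presumably what the question is asking the authors to confirm.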

The present repository is related to Défossez et al., Decoding speech perception from non-invasive brain recordings.

An updated version of the paper you refer to will soon be released.