brain-networks/edge-centric_demo

Question about the memory cost of computing the eFC


Hi, everyone

I was a little confused: I'm trying to use this code to compute the edge FC for further classification analysis. However, I notice that the example code only computes the edge community similarity and normalized entropy from the edge time series, and computing the eFC itself seems not very efficient (it may need a lot of memory). Do you have any advice to overcome this problem, or can I use the edge community similarity and normalized entropy instead of the eFC?
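To make the memory issue concrete, here is a minimal NumPy sketch of the standard eFC construction (z-score node time series, form edge time series as element-wise products, then correlate edges). This is a hypothetical helper written for illustration, not code from this repo; the function name `efc_naive` is my own. Note the output is E × E with E = N(N−1)/2, so it grows as O(N⁴) in the number of nodes.

```python
import numpy as np

def efc_naive(ts):
    """Full eFC from a (time x nodes) array.

    Memory grows with E^2, where E = N*(N-1)/2 is the number of edges,
    which is what makes large parcellations/datasets problematic.
    """
    z = (ts - ts.mean(0)) / ts.std(0)           # z-score each node's time series
    i, j = np.triu_indices(ts.shape[1], k=1)    # upper-triangle node pairs = edges
    ets = z[:, i] * z[:, j]                     # edge time series, shape (T, E)
    b = (ets * ets).sum(0)                      # per-edge normalization terms
    return (ets.T @ ets) / np.sqrt(np.outer(b, b))
```

For example, 400 nodes already gives E = 79,800 edges, i.e. an eFC matrix of ~6.4 billion entries in double precision.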

eFC is fundamentally "bigger" than nFC because its dimensions are edge x edge rather than node x node. As you've noted, this can make it challenging to apply to large datasets due to memory issues. You've correctly highlighted one possible strategy: focus on lower-dimensional derivatives of eFC, e.g. entropy and edge community similarity. In unpublished work we've compared eFC across hundreds of subjects from the Human Connectome Project piecemeal, by generating eFC among smaller subsets of edges and repeating this process until we've done so for all edge pairs (the entire eFC matrix).

More generally, how to deal with the increased dimensionality of eFC efficiently remains an open question.