wenhui0206/NeuroGPT

Question about handling representation collapse

Aceticia opened this issue · 0 comments

Hi all, thanks for the terrific work. Reading the paper, it seems strange to me that the model doesn't show any representation collapse. When the objective is to predict future signals in feature space, the trivial solution is for the encoder to always output a constant (e.g., zero). JEPA handles this with an EMA target encoder, and some other works handle it with variance regularization. Interestingly, I don't see any description of a collapse-prevention mechanism in your work. Did you observe no collapse in the representations at all?
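
For context, here is a minimal PyTorch sketch of the two anti-collapse mechanisms I mentioned. This is not from the NeuroGPT codebase; the encoder, data shapes, and hyperparameters (`decay`, `gamma`, `eps`) are all illustrative placeholders, just to make concrete what I mean by EMA and variance regularization:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative encoder; the actual architecture in the paper differs.
encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128))

# --- JEPA-style EMA target encoder ---
# The target encoder receives no gradients; its weights track the online
# encoder via an exponential moving average, which breaks the trivial
# "always output a constant" solution.
target_encoder = copy.deepcopy(encoder)
for p in target_encoder.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(online: nn.Module, target: nn.Module, decay: float = 0.996):
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(decay).add_(p_o, alpha=1.0 - decay)

# --- VICReg-style variance regularization ---
# Penalizes any embedding dimension whose std across the batch drops
# below gamma, so the encoder cannot map everything to a single point.
def variance_penalty(z: torch.Tensor, eps: float = 1e-4, gamma: float = 1.0):
    std = torch.sqrt(z.var(dim=0) + eps)
    return F.relu(gamma - std).mean()

# Toy training step: predict the target features of a "future" chunk.
x_context = torch.randn(32, 64)  # context features (dummy data)
x_future = torch.randn(32, 64)   # future features (dummy data)

z_pred = encoder(x_context)
with torch.no_grad():
    z_target = target_encoder(x_future)

loss = F.mse_loss(z_pred, z_target) + variance_penalty(z_pred)
loss.backward()
# ... optimizer.step() would go here ...
ema_update(encoder, target_encoder)
```

Without either the stop-gradient/EMA target or the variance term, minimizing the feature-space prediction loss alone admits the constant-output solution, which is why I'm curious how collapse was avoided here.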