KimMeen/Time-LLM

Question about text prototype in reprogramming


Hi there. Thanks for publishing your code. I'm interested in your patch reprogramming, but I couldn't find the text prototypes in your code (TimeLLM.py):
[screenshot of the `self.mapping_layer` code in TimeLLM.py]
This looks like only a linear projection. I'm not sure whether I've misunderstood the code. Thanks for your reply.

It seems you've caught on quite well. The authors mention in the paper: "A simple solution is to maintain a small collection of text prototypes by linearly probing E, denoted as E'." The `mapping_layer` you pointed to most likely corresponds to this.

I think they load the whole word-embedding matrix of the backbone model and then obtain the text prototypes from it via this linear layer (`self.mapping_layer`).
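
A minimal sketch of that idea (illustrative shapes and variable names, not necessarily the exact code in TimeLLM.py): the full word-embedding matrix E of shape (vocab_size, d_model) is mixed along the vocabulary dimension by a linear layer to produce a small set of prototypes E' of shape (num_prototypes, d_model).

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not taken from the repo's config)
vocab_size, d_model = 32000, 768
num_prototypes = 1000

# Stand-in for the frozen LLM word-embedding matrix E;
# in Time-LLM this would come from the backbone's input embeddings.
word_embeddings = torch.randn(vocab_size, d_model)

# Linear probing over the vocabulary dimension:
# (d_model, vocab_size) -> (d_model, num_prototypes) -> (num_prototypes, d_model)
mapping_layer = nn.Linear(vocab_size, num_prototypes)
prototypes = mapping_layer(word_embeddings.permute(1, 0)).permute(1, 0)

print(prototypes.shape)  # torch.Size([1000, 768]) -- E', used in patch reprogramming
```

Only the small `mapping_layer` needs to be trained here; the word embeddings themselves stay frozen along with the rest of the backbone.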


Yes, your understanding is correct. We have also provided a detailed description in the "Patch Reprogramming" section of our paper, which you can refer to.