Question about text prototype in reprogramming
Closed this issue · 3 comments
ztb-35 commented
celestialxevermore commented
It seems you've understood it well. The authors state in the paper: "A simple solution is to maintain a small collection of text prototypes by linearly probing E, denoted as E'." The `mapping_layer` you mentioned likely corresponds to this.
ztb-35 commented
I think they import the model's whole word-embedding weight matrix and then obtain the text prototypes via this linear layer (`self.mapping_layer`).
kwuking commented
> I think they import the model's whole word-embedding weight matrix and then obtain the text prototypes via this linear layer (`self.mapping_layer`).
Yes, your understanding is correct. We have also provided a detailed description in the "Patch Reprogramming" section of our paper, which you can refer to.
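For anyone else landing here, the mechanism discussed above can be sketched roughly like this. This is a minimal NumPy illustration, not the actual repository code: the sizes (`vocab_size`, `d_model`, `num_prototypes`) and the weight matrix `W` are hypothetical stand-ins for the learnable linear layer (`self.mapping_layer` in the discussion), which maps the full vocabulary of word embeddings E down to a small set of text prototypes E'.

```python
import numpy as np

# Hypothetical sizes for illustration only; real models use a
# vocabulary of tens of thousands of words and a larger d_model.
vocab_size, d_model, num_prototypes = 100, 16, 5

rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, d_model))  # word-embedding matrix E

# Stand-in for the learnable linear map (vocab_size -> num_prototypes).
# In training this would be a learned layer, not random weights.
W = rng.standard_normal((num_prototypes, vocab_size))

# Each prototype in E' is a linear combination of all word embeddings,
# so the prototypes live in the same space as the word embeddings.
E_prime = W @ E
print(E_prime.shape)  # (5, 16): a small collection of text prototypes
```

The key point is that the linear layer acts across the vocabulary dimension, so each of the few prototype vectors summarizes the whole embedding table rather than selecting individual tokens.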