Question regarding FT-seq-Frozen
jcy132 opened this issue · 0 comments
jcy132 commented
In the 'Learning to Prompt for Continual Learning' paper, I understand 'FT-seq-Frozen' in Table 1 to be naive prompt tuning at the input token feature.
To implement the FT-seq-Frozen setting on CIFAR-100, I set the prompt pool_size to 1.
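For reference, this is roughly what I have in mind; a minimal sketch of prompt tuning with a frozen backbone, where the tiny encoder is just a stand-in for the frozen pretrained ViT, and `prompt_length` and the variable names are my own, not the repo's actual arguments:

```python
import torch
import torch.nn as nn

embed_dim, prompt_length, num_classes = 768, 5, 100

# Stand-in for the frozen pretrained ViT blocks (not the real backbone).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=12, batch_first=True),
    num_layers=2,
)
for p in encoder.parameters():  # backbone is frozen
    p.requires_grad = False

# A single learnable prompt (pool_size = 1, so no key/query selection).
prompt = nn.Parameter(torch.randn(1, prompt_length, embed_dim) * 0.02)
head = nn.Linear(embed_dim, num_classes)  # classifier trained alongside the prompt

def forward(tokens):
    # tokens: (B, N, D) patch embeddings from the frozen patch-embed layer
    p = prompt.expand(tokens.size(0), -1, -1)       # (B, L, D)
    out = encoder(torch.cat([p, tokens], dim=1))    # prepend prompt at the input
    return head(out[:, 0])                          # classify from the first token

logits = forward(torch.randn(4, 196, embed_dim))    # e.g. 14x14 patches
```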
The result is Acc@1 81.49 with Forgetting 6.3667. Is there any point that I missed?
How did you set the hyperparameters for FT-seq-Frozen?
Specifically, did you set the argument 'train_mask = False' for FT-seq-Frozen?