1Zhejiang University, 2DAMO Academy, Alibaba Group, 3Worcester Polytechnic Institute
†Corresponding Author
This repo contains the PyTorch implementation of Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization for Few-shot Generalization, accepted to Findings of EMNLP 2023.
This repo is built on top of PERFECT. Please refer to facebookresearch/perfect for setting up the Python environment.
- pretrain_prompt/gradient_edit_t5base.pt: checkpoint of the meta-trained gradient regularization function for t5-base.
- pretrain_prompt/prompt_t5base.pt: checkpoint of the meta-trained soft prompt for t5-base.
- pretrain_prompt/gradient_edit_flant5xl.pt: checkpoint of the meta-trained gradient regularization function for flant5xl.
- pretrain_prompt/prompt_flant5xl.pt: checkpoint of the meta-trained soft prompt for flant5xl.
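At inference or tuning time, a meta-trained soft prompt like the checkpoints above is typically prepended to the token embeddings before they enter the encoder. The sketch below illustrates that step with NumPy stand-ins; the prompt length, hidden size, and random tensors are illustrative assumptions, not values from the released checkpoints (which would be loaded with `torch.load` instead):

```python
import numpy as np

# Illustrative shapes only: e.g. 100 prompt tokens, hidden size 768 (t5-base).
prompt_len, hidden = 100, 768
soft_prompt = np.random.randn(prompt_len, hidden).astype(np.float32)  # stand-in for prompt_t5base.pt

# A batch of already-embedded input tokens (stand-in for the frozen LM's embeddings).
batch, seq_len = 4, 32
input_embeds = np.random.randn(batch, seq_len, hidden).astype(np.float32)

# Prepend the same soft prompt to every sequence in the batch.
prompt_batch = np.broadcast_to(soft_prompt, (batch, prompt_len, hidden))
full_embeds = np.concatenate([prompt_batch, input_embeds], axis=1)

print(full_embeds.shape)  # (4, 132, 768): prompt tokens followed by input tokens
```

Only the soft prompt is updated during downstream prompt-tuning; the backbone LM stays frozen.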
To run meta-training and downstream prompt-tuning, see the scripts provided at scripts/meta-train.sh and scripts/prompt-tuning.sh.
Our project is developed based on the following repositories:
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
- PPT: Pre-trained Prompt Tuning for Few-shot Learning
If you find this work useful, please consider citing our paper:
@article{pan2023self,
title={Self-supervised Meta-Prompt Learning with Meta-Gradient Regularization for Few-shot Generalization},
author={Pan, Kaihang and Li, Juncheng and Song, Hongye and Lin, Jun and Liu, Xiaozhong and Tang, Siliang},
journal={arXiv preprint arXiv:2303.12314},
year={2023}
}