Update: We have released OpenPrompt, an open-source prompt-learning toolkit. Check it out!
Must-read papers on prompt-based tuning for pre-trained language models. The paper list is mainly maintained by Ning Ding and Shengding Hu.
This is a paper list about prompt-based tuning for large-scale pre-trained language models. Unlike traditional fine-tuning, which trains an explicit classifier head on top of the model, prompt-based tuning reformulates classification or regression as the model's own pre-training task (e.g., masked-token prediction), so the pre-trained model can solve the downstream task directly.
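To make the distinction concrete, below is a minimal sketch of the cloze-style approach (the idea behind PET and LM-BFF in the list) using the Hugging Face transformers library. The template and the "great"/"terrible" verbalizer words are illustrative choices for sentiment classification, not taken from any particular paper:

```python
# Minimal sketch: zero-shot cloze-style classification with a masked LM.
# Instead of training a classifier head, wrap the input in a template
# containing [MASK] and compare the model's scores for a small set of
# label words (the "verbalizer").
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The movie was a waste of two hours."
prompt = f"{text} It was [MASK]."  # template (illustrative choice)
label_words = {"great": "positive", "terrible": "negative"}  # verbalizer (illustrative)

inputs = tokenizer(prompt, return_tensors="pt")
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_index]

# The pre-training task (masked-token prediction) now performs classification:
# the label whose word scores highest at the [MASK] position wins.
scores = {w: logits[tokenizer.convert_tokens_to_ids(w)].item() for w in label_words}
print(label_words[max(scores, key=scores.get)])  # expected: negative
```

Many of the papers below vary exactly these two ingredients: how the template is built (manually written, automatically searched, or learned as continuous "soft" vectors) and how the verbalizer maps label words to classes; prompt-based tuning additionally fine-tunes the model (or a small set of prompt parameters) through this same cloze objective.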
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. Preprint.
  Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig. [pdf], [project], 2021.7
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR.
  Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. [pdf], [project] (T5), 2019.10
- Parameter-Efficient Transfer Learning for NLP. ICML 2019.
  Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly. [pdf], [project], 2019.6
- How Can We Know What Language Models Know? TACL 2020.
  Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig. [pdf], [project], 2019.11
- Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference. EACL 2021.
  Timo Schick, Hinrich Schütze. [pdf], [project] (PET), 2020.1
- Language Models are Few-Shot Learners. NeurIPS 2020.
  Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. [pdf], [website] (GPT-3), 2020.5
- It’s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners. NAACL 2021.
  Timo Schick, Hinrich Schütze. [pdf], 2020.9
- AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. EMNLP 2020.
  Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh. [pdf], [website] (AutoPrompt), 2020.10
- Automatically Identifying Words That Can Serve as Labels for Few-Shot Text Classification. COLING 2020.
  Timo Schick, Helmut Schmid, Hinrich Schütze. [pdf], [project], 2020.12
- Making Pre-trained Language Models Better Few-shot Learners. ACL 2021.
  Tianyu Gao, Adam Fisch, Danqi Chen. [pdf], [project] (LM-BFF), 2020.12
- Prefix-Tuning: Optimizing Continuous Prompts for Generation. ACL 2021.
  Xiang Lisa Li, Percy Liang. [pdf], 2021.1
- Calibrate Before Use: Improving Few-Shot Performance of Language Models. Preprint.
  Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh. [pdf], [project], 2021.2
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm. Preprint.
  Laria Reynolds, Kyle McDonell. [pdf], 2021.2
- Improving and Simplifying Pattern Exploiting Training. Preprint.
  Derek Tam, Rakesh R Menon, Mohit Bansal, Shashank Srivastava, Colin Raffel. [pdf], 2021.3
- GPT Understands, Too. Preprint.
  Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang. [pdf], [project] (P-tuning), 2021.3
- The Power of Scale for Parameter-Efficient Prompt Tuning. Preprint.
  Brian Lester, Rami Al-Rfou, Noah Constant. [pdf], [implementation], 2021.4
- Learning How to Ask: Querying LMs with Mixtures of Soft Prompts. NAACL 2021.
  Guanghui Qin, Jason Eisner. [pdf], 2021.4
- Factual Probing Is [MASK]: Learning vs. Learning to Recall. NAACL 2021.
  Zexuan Zhong, Dan Friedman, Danqi Chen. [pdf], [project], 2021.4
- AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction. Preprint.
  Xiang Chen, Xin Xie, Ningyu Zhang, Jiahuan Yan, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen. [pdf], 2021.4
- PTR: Prompt Tuning with Rules for Text Classification. Preprint.
  Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun. [pdf] (PTR), 2021.5
- Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models. Preprint.
  Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel. [pdf], 2021.6
- WARP: Word-level Adversarial ReProgramming. ACL 2021.
  Karen Hambardzumyan, Hrant Khachatrian, Jonathan May. [pdf], [project], 2021.6
- Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification. Preprint.
  Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Juanzi Li, Maosong Sun. [pdf], 2021.8
- Noisy Channel Language Model Prompting for Few-Shot Text Classification. Preprint.
  Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer. [pdf], 2021.8
- Language Models as Knowledge Bases? EMNLP 2019.
  Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. [pdf], [project] (LAMA), 2019.9
- What Makes Good In-Context Examples for GPT-3? Preprint.
  Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen. [pdf], 2021.1
- How Many Data Points is a Prompt Worth? NAACL 2021.
  Teven Le Scao, Alexander M. Rush. [pdf], 2021.3
- Surface Form Competition: Why the Highest Probability Answer Isn’t Always Right. Preprint.
  Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer. [pdf], [project], 2021.4
- Natural Instructions: Benchmarking Generalization to New Tasks from Natural Language Instructions. Preprint.
  Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi. [pdf], [project], 2021.4
- Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. Preprint.
  Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp. [pdf], 2021.4
- Meta-tuning Language Models to Answer Prompts Better. Preprint.
  Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein. [pdf], 2021.4
- True Few-Shot Learning with Language Models. Preprint.
  Ethan Perez, Douwe Kiela, Kyunghyun Cho. [pdf], [project], 2021.5
- Thinking Aloud: Dynamic Context Generation Improves Zero-Shot Reasoning Performance of GPT-2. Preprint.
  Gregor Betz, Kyle Richardson, Christian Voigt. [pdf], 2021.3
- GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation. Preprint.
  Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park. [pdf], 2021.4
- PADA: A Prompt-based Autoregressive Approach for Adaptation to Unseen Domains. Preprint.
  Eyal Ben-David, Nadav Oved, Roi Reichart. [pdf], [project], 2021.5
- Prompt-Learning for Fine-grained Entity Typing. Preprint.
  Ning Ding, Yulin Chen, Xu Han, Guangwei Xu, Pengjun Xie, Hai-Tao Zheng, Zhiyuan Liu, Juanzi Li, Hong-Gee Kim. [pdf], 2021.8
We thank Yujia Qin, Xiachong Feng, Chenglei Si, and Tianbao Xie for their paper recommendations. Pull requests and issues are welcome!