
P-tuning

A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

Xiao Liu*, Yanan Zheng*, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang

You may also be interested in our other work GLM: All NLP Tasks Are Generation Tasks: A General Pretraining Framework.

How to use our code

We have released the code and datasets for the LAMA and few-shot SuperGLUE (32-dev) experiments. Please check the README.md and requirement.txt in the corresponding subdirectories for details.
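
A typical setup flow for one experiment looks like the following sketch. The subdirectory name LAMA is used here as an example; substitute the subdirectory of the experiment you want to run, and see its README.md for the exact steps.

cd LAMA
pip install -r requirement.txt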

The LAMA and FewGLUE_32dev datasets are available. The LAMA dataset should be placed in ./data directory, and the SuperGLUE dataset should be placed in the ./ (project root) directory.
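
For reference, the expected layout after downloading the datasets looks roughly like this (a sketch; the exact contents of the experiment subdirectories may differ):

P-tuning/
├── data/              (LAMA dataset goes here)
├── FewGLUE_32dev/     (few-shot SuperGLUE dataset, placed in the project root)
└── ...                (experiment subdirectories, each with its own README.md and requirement.txt)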

Citation

If you find our work useful, please cite the following paper:

@article{liu2021gpt,
  title={GPT Understands, Too}, 
  author={Xiao Liu and Yanan Zheng and Zhengxiao Du and Ming Ding and Yujie Qian and Zhilin Yang and Jie Tang},
  year={2021},
  journal={arXiv preprint arXiv:2103.10385},
  url={https://arxiv.org/abs/2103.10385}
}