
NPPrompt

[ACL 2023] NPPrompt: Pre-trained Language Models Can be Fully Zero-Shot Learners

We are currently refining the code to align with state-of-the-art large language models (LLMs). Stay tuned for updates!

We use the datasets from OpenPrompt.

Download the datasets

cd datasets
bash download_text_classification.sh

[Update] The download link in the download_text_classification.sh script is no longer valid. The required files can instead be downloaded from this Google Drive link: https://drive.google.com/file/d/1i98mgI93Qy6KHqG4gIlxnVExjfiGi7ss/view?usp=sharing

Run the code

bash example_run.sh
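The script above runs the NPPrompt method from the paper: for each label, a verbalizer is built from the vocabulary tokens nearest to the label name in the PLM's embedding space, and the masked-LM logits are aggregated over those tokens to score the label. Below is a minimal numpy sketch of that scoring idea; the embeddings, token names, and logits are toy values for illustration, not outputs of a real pre-trained model.

```python
import numpy as np

# Sketch of NPPrompt's nonparametric verbalizer: score each label by
# aggregating the masked-LM logits of the k vocabulary tokens closest
# to the label-name embedding. All values below are hypothetical.

def nearest_neighbors(label_emb, vocab_embs, k):
    """Indices and cosine similarities of the k vocab tokens closest to the label."""
    sims = vocab_embs @ label_emb / (
        np.linalg.norm(vocab_embs, axis=1) * np.linalg.norm(label_emb) + 1e-12
    )
    top = np.argsort(-sims)[:k]
    return top, sims[top]

def label_score(mask_logits, neighbor_ids, neighbor_sims):
    """Similarity-weighted average of the [MASK] logits over the label's neighbors."""
    w = neighbor_sims / neighbor_sims.sum()
    return float(mask_logits[neighbor_ids] @ w)

# Toy 6-token vocabulary with 3-dim embeddings (hypothetical values).
vocab_embs = np.array([
    [1.0, 0.0, 0.0],   # "good"
    [0.9, 0.1, 0.0],   # "great"
    [0.0, 1.0, 0.0],   # "bad"
    [0.0, 0.9, 0.1],   # "terrible"
    [0.0, 0.0, 1.0],   # "movie"
    [0.5, 0.5, 0.0],   # "okay"
])
label_embs = {"positive": np.array([1.0, 0.0, 0.0]),
              "negative": np.array([0.0, 1.0, 0.0])}

# Pretend [MASK] logits from the PLM for a clearly positive input.
mask_logits = np.array([5.0, 4.0, 1.0, 0.5, 0.2, 2.0])

scores = {}
for name, emb in label_embs.items():
    ids, sims = nearest_neighbors(emb, vocab_embs, k=2)
    scores[name] = label_score(mask_logits, ids, sims)

prediction = max(scores, key=scores.get)
print(prediction)  # → positive
```

In the actual method the embeddings come from the PLM's input embedding table and the logits from its masked-LM head; no labeled data or manual label words are needed, which is what makes the approach fully zero-shot.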

Citation

Please cite our paper if you find NPPrompt useful for your research:

@article{zhao2022pre,
  title={Pre-trained Language Models Can be Fully Zero-Shot Learners},
  author={Zhao, Xuandong and Ouyang, Siqi and Yu, Zhiguo and Wu, Ming and Li, Lei},
  journal={arXiv preprint arXiv:2212.06950},
  year={2022}
}