Yoichi Ishibashi, Danushka Bollegala, Katsuhito Sudoh, Satoshi Nakamura: Evaluating the Robustness of Discrete Prompts (EACL 2023)
Install the required packages.
pip install -r requirements.txt
Our experiments are divided into two phases: (1) prompt learning, and (2) analyzing the robustness of the learned prompts.
- Learning prompt tokens with AutoPrompt (AP).
cd ap
sh ap_label-token-search.sh
sh ap_trigger-token-search.sh
- Fine-tuning a PLM with Manually-written Prompts (MP).
cd mp
sh mp_finetuning.sh
- Evaluating the robustness of LM prompts. The following scripts perform the four robustness evaluations of LM prompts.
AutoPrompt (AP)
cd ap
sh ap_run-all-robust-eval.sh
Manually-written Prompts (MP)
cd mp
sh mp_run-all-robust-eval.sh
We created adversarial NLI datasets (see Sec. 3.5, Adversarial Perturbations, in our paper). These datasets are used in the prompt robustness evaluations described above.
data/superglue/cb/perturbation-label-change.tsv
data/superglue/cb/perturbation-label-no-change.tsv
data/superglue/mnli/perturbation-label-change.tsv
data/superglue/mnli/perturbation-label-no-change.tsv
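The perturbation files are plain TSVs, so they can be inspected with standard tooling. A minimal Python sketch for reading one is below; note that the premise/hypothesis/label column names are assumptions for illustration, so verify them against the actual file header before relying on them.

```python
import csv
import io

# Inline sample standing in for one of the perturbation TSVs above.
# The column layout (premise, hypothesis, label) is an assumption --
# check the real file header, which may use different names.
sample = (
    "premise\thypothesis\tlabel\n"
    "It is raining.\tIt is not raining.\tcontradiction\n"
)

# For the real data, replace io.StringIO(sample) with e.g.
# open("data/superglue/cb/perturbation-label-change.tsv")
with io.StringIO(sample) as f:
    reader = csv.DictReader(f, delimiter="\t")
    rows = list(reader)

for row in rows:
    print(row["premise"], "->", row["label"])
```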
@inproceedings{Ishibashi:EACL:2023,
author = {Yoichi Ishibashi and Danushka Bollegala and Katsuhito Sudoh and Satoshi Nakamura},
title = {Evaluating the Robustness of Discrete Prompts},
booktitle = {Proc. of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023)},
year = {2023}
}