This repository is the official implementation of PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks.
To install requirements:
conda create --name exp --file requirements.txt
To obtain the C4 realnewslike split, please run:
python get_large_pre_training_c4_data.py
Next, run the Prompt Pre-training:
python pre_train_t5.py --config model_config/pre_train_keyword_pt.yml --serialization-dir pretrain_web_page_keyword_t5_short --train
On an NVIDIA A100 GPU, pre-training takes about 24 hours.
We also provide the checkpoint used in the paper here. Please put the folder pretrain_web_page_keyword_t5_short in the root directory of this project.
To run the full data augmentation experiments, please follow the instructions below:
To run the wikiann experiments on the first GPU under the shot-10 setting with random seed 18, please run:
bash script/run_few_shot_bert_prefix.sh 0 18 10 wikiann 1000
To run the sst2 experiments on the first GPU under the shot-10 setting with random seed 18, please run:
bash script/run_few_shot_bert_prefix_sen_cls.sh 0 18 10 sst2 1000
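Few-shot results are usually averaged over several random seeds, so the two commands above can be wrapped in a small sweep. The loop below is a dry run that only prints the commands; the reading of the positional arguments (GPU id, seed, shot count, dataset name, final value) is inferred from the example invocations above, not from documented flags. Remove the leading echo to actually launch the experiments.

```shell
# Dry-run sweep over random seeds for both tasks. Argument order follows
# the example invocations above: <gpu> <seed> <shot> <dataset> <last value>;
# the trailing 1000 is taken verbatim from the examples. Delete the
# leading "echo" on each line to actually run the scripts.
for seed in 18 19 20; do
  echo bash script/run_few_shot_bert_prefix.sh 0 "$seed" 10 wikiann 1000
  echo bash script/run_few_shot_bert_prefix_sen_cls.sh 0 "$seed" 10 sst2 1000
done
```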