Code for the NAACL 2022 paper "Answer Consolidation: Formulation and Benchmarking".

Requirements:
- PyTorch
- Transformers
- wandb
- tqdm
- scikit-learn
Finetune the sentence embedding models with the following command:
>> python main.py --model_name_or_path $MODEL --format sentence_embedding
where $MODEL can be one of ['roberta-large', 'sentence-transformers/all-roberta-large-v1', 'princeton-nlp/sup-simcse-roberta-large'].
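To try all three sentence-embedding checkpoints in sequence, a minimal shell loop could look like the sketch below. The `echo` prefix only previews each command; remove it to actually launch the runs.

```shell
# Sketch: iterate over the three sentence-embedding checkpoints.
# Drop the leading `echo` to launch each finetuning run for real.
for MODEL in roberta-large \
             sentence-transformers/all-roberta-large-v1 \
             princeton-nlp/sup-simcse-roberta-large; do
  echo python main.py --model_name_or_path "$MODEL" --format sentence_embedding
done
```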
Finetune the NLI models with the following command:
>> python main.py --model_name_or_path $MODEL --format nli
where $MODEL can be one of ['roberta-large', 'roberta-large-mnli'].
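Likewise, both NLI checkpoints can be finetuned back to back with a small loop (a sketch; the `echo` prefix previews the commands and can be removed to run them):

```shell
# Sketch: iterate over the two NLI checkpoints.
# Drop the leading `echo` to launch each finetuning run for real.
for MODEL in roberta-large roberta-large-mnli; do
  echo python main.py --model_name_or_path "$MODEL" --format nli
done
```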
Evaluation results on the development and test sets are synced to the wandb dashboard after every epoch.