We first extract the premise and conclusion from a given argument, then sample contexts that are consistent with neither the premise nor the conclusion, intervene on those contexts so that the premise holds, and finally estimate the probability of the conclusion for each revised context (unit).
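In causal terms, this procedure estimates the probability of sufficiency (PS) of the premise for the conclusion. The formula below is the standard definition of PS from the causality literature, with $X$ the premise and $Y$ the conclusion, stated here for orientation rather than quoted from the paper:

$$\mathrm{PS} = P\big(Y_{\mathrm{do}(X=1)} = 1 \,\big|\, X = 0,\ Y = 0\big)$$

That is, the probability that the conclusion would hold after intervening to enforce the premise, in contexts where neither the premise nor the conclusion currently holds.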
The code is provided in the `code/` folder:
- `claim_extraction.py`: extract the premise and conclusion from a given argument
- `context_sampling.py`: sample contexts that are consistent with ¬premise and ¬conclusion
- `revision_under_intervention.py`: make interventions on the contexts to meet the premise
- `probability_estimation.py`: estimate the probability of the conclusion for each unit
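A minimal sketch of how these four stages might chain together. The imported function names and signatures (`extract_claims`, `sample_contexts`, `revise_context`, `estimate_probability`) are hypothetical placeholders, not the actual interfaces exposed by the scripts above:

```python
# Hypothetical end-to-end driver; the imported names and signatures
# are illustrative assumptions, not the actual APIs of code/.
from claim_extraction import extract_claims              # assumed API
from context_sampling import sample_contexts             # assumed API
from revision_under_intervention import revise_context   # assumed API
from probability_estimation import estimate_probability  # assumed API

def assess_sufficiency(argument: str, n_contexts: int = 10) -> float:
    """Estimate how sufficient the premise is for the conclusion."""
    # Step 1: split the argument into its premise and conclusion.
    premise, conclusion = extract_claims(argument)

    # Step 2: sample contexts consistent with ¬premise and ¬conclusion.
    contexts = sample_contexts(premise, conclusion, n=n_contexts)

    # Step 3: intervene on each context so that the premise holds.
    revised = [revise_context(ctx, premise) for ctx in contexts]

    # Step 4: estimate P(conclusion) in each revised unit; the mean
    # over units approximates the probability of sufficiency.
    probs = [estimate_probability(ctx, conclusion) for ctx in revised]
    return sum(probs) / len(probs)
```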
We provide the data we experiment with in the `data/` folder:
- `Bigbench-LFD.json`: the informal statements from the BIG-bench logical fallacy detection task
- `Climate.json`: arguments from climate change articles fact-checked by climate scientists (the original dataset)
- `AAE_sampled100.json`: randomly sampled arguments from the Argument-Annotated Essays dataset
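A quick way to inspect these files; `json.load` is standard-library Python, but the top-level record structure assumed below is a guess, so check the first entry against the actual schema:

```python
import json

# Load one of the provided datasets; each file is assumed to be a
# top-level JSON array of argument records (an assumption, not a
# documented schema).
with open("data/AAE_sampled100.json") as f:
    records = json.load(f)

print(f"Loaded {len(records)} records")
print(records[0])  # inspect one record to see the actual field names
```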
Please cite our paper if this repository inspires your work:
@article{liu2024casa,
  title={CASA: Causality-driven Argument Sufficiency Assessment},
  author={Liu, Xiao and Feng, Yansong and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2401.05249},
  year={2024}
}