Code for the paper "AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension" (ACL 2022). The following dependencies are required:
torch==1.7.1
dgl-cu101==0.6.1
stanza==1.2.3
transformers==4.5.0
networkx
nltk
scikit-learn
pylev
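The pinned requirements above can be installed with pip, for example (a minimal sketch assuming a CUDA 10.1 environment, since dgl-cu101 is the CUDA 10.1 build of DGL; adjust the package builds to match your local CUDA version):
pip install torch==1.7.1 dgl-cu101==0.6.1 stanza==1.2.3 transformers==4.5.0
pip install networkx nltk scikit-learn pylev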
We use Graphene to extract EDUs: put all the contexts and options line by line in a .txt file and follow the Graphene instructions to obtain the EDU outputs, as sketched below. Alternatively, you can use our preprocessed files under the ReclorDataset / LogiQADataset directories. We also provide cached files of the preprocessed data on Google Drive; download them and put them under the ReclorDataset / LogiQADataset directories.
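As a minimal sketch of preparing the Graphene input (assuming the standard ReClor JSON layout with "context" and "answers" fields; the file paths here are only placeholders), the contexts and options can be flattened into a one-entry-per-line .txt file with jq:
jq -r '.[] | .context, .answers[]' ReclorDataset/train.json > graphene_input.txt  ## one context / option per line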
Checkpoints can be accessed on Google Drive. To run evaluation with them, use:
export MODE=eval_only
bash scripts/LogiGraph_Roberta.sh /PATH/TO/RECLOR/CHECKPOINTS ## ReClor evaluation
bash scripts/LogiGraph_Roberta_LogiQA.sh /PATH/TO/LOGIQA/CHECKPOINTS ## LogiQA evaluation
For the ReClor dataset, we submitted the prediction file to the ReClor leaderboard, where AdaLoGN ranks #10 (as of 03/15/2022).
You can also install wandb and set export WANDB_DISABLED=false in the training scripts to visualize the training process.
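A minimal sketch of the wandb setup (the login step assumes you already have a wandb account and API key):
pip install wandb
wandb login  ## paste your API key when prompted
export WANDB_DISABLED=false  ## or set this inside the training scripts, as noted above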
To train the models yourself, starting from a pretrained RoBERTa-large, run:
export MODE=do_train
bash scripts/LogiGraph_Roberta.sh /PATH/TO/ROBERTA/LARGE ## ReClor training
bash scripts/LogiGraph_Roberta_LogiQA.sh /PATH/TO/ROBERTA/LARGE ## LogiQA training
Please kindly cite this paper in your publications if it helps your research.
@inproceedings{li2022adalogn,
title={AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension},
author={Li, Xiao and Cheng, Gong and Chen, Ziheng and Sun, Yawei and Qu, Yuzhong},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
pages={7147--7161},
year={2022}
}