This study aims to disentangle pure logical reasoning from text understanding by contrasting abstract and contextualized logic problems across a comprehensive set of domains. We explore whether LLMs demonstrate genuine reasoning capabilities across domains when the underlying logical structure remains constant. We focus on two main questions:
(1) Can abstract logic problems alone accurately benchmark LLMs' reasoning ability in real-world scenarios, disentangled from the contextual support present in practical settings?
(2) Does fine-tuning LLMs on abstract logic problems generalize to contextualized logic problems and vice versa?
To investigate these questions, we focus on standard propositional logic, specifically propositional deductive and abductive reasoning. In particular, we construct instantiated datasets for deductive and abductive reasoning in three steps (a minimal code sketch follows the list):
- Create a formal logic template from DyVal
- Instantiate variables with meaningful sentences
- Compile the sentences into a passage based on the logic template
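Below is a minimal sketch of these three steps. The template format, symbol names, and helper function are illustrative assumptions and do not mirror the repository's internal code.

```python
# Illustrative sketch only: template layout, symbols, and mapping are assumptions.

# Step 1: a DyVal-style formal logic template with abstract symbols.
template = {
    "premises": ["aaa -> aab", "aaa"],  # modus ponens skeleton
    "conclusion": "aab",
}

# Step 2: instantiate abstract symbols with meaningful sentences
# from a chosen domain (here, a hypothetical everyday-life domain).
instantiation = {
    "aaa": "it is raining",
    "aab": "the ground is wet",
}

def render(expr: str, mapping: dict) -> str:
    """Replace abstract symbols with their contextual sentences."""
    out = expr
    for symbol, sentence in mapping.items():
        out = out.replace(symbol, sentence)
    return out.replace("->", "implies that")

# Step 3: compile the instantiated sentences into a passage that
# follows the logic template.
passage = ". ".join(render(p, instantiation).capitalize() for p in template["premises"])
question = f"Does it follow that {render(template['conclusion'], instantiation)}?"
print(passage + ".")
print(question)
```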
You can find the instantiated logic problems in the directory below, where {level_number} can be 1, 2, 3, or 4, and {logic_type} can be "deductive_logic" or "abductive_logic".
contexthub/data/data_level{level_number}/{logic_type}.json
Data with ground-truth chain-of-thought reasoning can be found at the path below:
contexthub/data/data_level{level_number}/{logic_type}_traincot.json
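A minimal example of loading one of these files, assuming it parses as standard JSON (the exact schema is not assumed here):

```python
import json

# Load the level-1 deductive logic problems (path from the layout above).
with open("contexthub/data/data_level1/deductive_logic.json") as f:
    problems = json.load(f)

# Inspect the overall structure; the exact fields depend on the dataset schema.
if isinstance(problems, list):
    print(len(problems), "items; first item keys:", list(problems[0].keys()))
elif isinstance(problems, dict):
    print(len(problems), "top-level keys:", list(problems)[:5])
```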
conda create --name contexthub python=3.9
conda activate contexthub
pip install -r requirements.txt
Use the command below to run the evaluation, where {model} can be qwen-0.5...qwen-110, yi-6...yi-34, llama-7...llama-72.
python evaluate.py --model {model} --dataset {logic_type} --level {level_number} --seed 42
You can also find other commands in:
run_evaluate.sh
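If you prefer to sweep all levels and logic types programmatically, a sketch like the one below works; it only uses the flags documented above (the model name and loop structure are assumptions, and run_evaluate.sh already provides ready-made commands).

```python
import itertools
import subprocess

model = "qwen-0.5"  # any supported model name
logic_types = ["deductive_logic", "abductive_logic"]
levels = [1, 2, 3, 4]

# Run evaluate.py once per (logic_type, level) combination.
for logic_type, level in itertools.product(logic_types, levels):
    subprocess.run(
        [
            "python", "evaluate.py",
            "--model", model,
            "--dataset", logic_type,
            "--level", str(level),
            "--seed", "42",
        ],
        check=True,
    )
```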
After evaluation, use the command below to compute scores:
python compute_score.py --dataset {logic_type} --level {level_number} --model {model}
You can also find other commands in:
run_compute_score.sh
The source code of ContextHub is licensed under Apache 2.0 and is intended solely for research use.