- 2022-12-19 We released a new survey paper: "Reasoning with Language Model Prompting: A Survey", based on this repository! We look forward to any comments or discussions on this topic :)
- 2022-09-14 We created this repository to maintain a paper list on Reasoning with Language Model Prompting.
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis and negotiation. This paper provides a comprehensive survey of cutting-edge research on reasoning with language model prompting. We introduce the research works with comparisons and summaries and provide systematic resources to help beginners. We also discuss the potential reasons behind the emergence of such reasoning abilities and highlight future research directions.
- Reasoning with Language Model Prompting: A Survey. Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen. [abs], 2022.12
- Towards Reasoning in Large Language Models: A Survey. Jie Huang, Kevin Chen-Chuan Chang. [abs], 2022.12
- A Survey of Deep Learning for Mathematical Reasoning. Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, Kai-Wei Chang. [abs], 2022.12
- A Survey for In-context Learning. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, Zhifang Sui. [abs], 2022.12
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks. Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi. [abs], 2021.6
- Template Filling for Controllable Commonsense Reasoning. Dheeraj Rajagopal, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman, Andrew Fano, Eduard Hovy. [abs], 2021.11
- Chain of Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, Denny Zhou. [abs], 2022.1
- Large Language Models are Zero-Shot Reasoners. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa. [abs], 2022.5
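The two entries above define the canonical prompting patterns: few-shot chain-of-thought (Wei et al.) prepends worked demonstrations containing intermediate steps, while zero-shot CoT (Kojima et al.) simply appends "Let's think step by step." A minimal sketch of both prompt constructions follows; `query_llm` is a hypothetical stand-in for any text-completion API, not a real client.

```python
# Minimal sketch of few-shot CoT (Wei et al., 2022) and zero-shot CoT
# (Kojima et al., 2022). `query_llm` is a hypothetical stub, not a real API.

def query_llm(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical stand-in for any text-completion API call."""
    raise NotImplementedError("plug in your model of choice here")

# Few-shot CoT: worked demonstrations with explicit intermediate steps.
FEW_SHOT_COT = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

def few_shot_cot(question: str) -> str:
    return query_llm(FEW_SHOT_COT.format(question=question))

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: a trigger phrase replaces hand-written demonstrations...
    reasoning = query_llm(f"Q: {question}\nA: Let's think step by step.")
    # ...followed by a second call that extracts the final answer.
    return query_llm(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )
```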
- Psychologically-informed chain-of-thought prompts for metaphor understanding in large language models. Ben Prystawski, Paul Thibodeau, Noah Goodman. [abs], 2022.9
- Complexity-based Prompting for Multi-step Reasoning. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot. [abs], 2022.10
- Language Models are Multilingual Chain-of-thought Reasoners. Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei. [abs], 2022.10
- Automatic Chain of Thought Prompting in Large Language Models. Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola. [abs], 2022.10
- Large Language Models are few(1)-shot Table Reasoners. Wenhu Chen. [abs], 2022.10
- Teaching Algorithmic Reasoning via In-context Learning. Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, Hanie Sedghi. [abs], 2022.11
- Iteratively Prompt Pre-trained Language Models for Chain of Thought. Boshi Wang, Xiang Deng, Huan Sun. [abs], 2022.3
- Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. Antonia Creswell, Murray Shanahan, Irina Higgins. [abs], 2022.5
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi. [abs], 2022.5
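Least-to-most prompting (Zhou et al., above) runs in two stages: first ask the model to decompose a complex question into simpler subquestions, then solve them sequentially, appending each sub-answer to the context for the next step. A rough sketch of that loop, reusing the hypothetical `query_llm` stub from the chain-of-thought sketch above:

```python
# Sketch of the two-stage least-to-most pattern: (1) decompose,
# (2) solve subquestions in order, feeding answers forward.

def least_to_most(question: str) -> str:
    decomposition = query_llm(
        "To solve the question below, what subquestions need to be answered first?\n"
        f"Question: {question}\nSubquestions (one per line):"
    )
    subquestions = [line.strip() for line in decomposition.splitlines() if line.strip()]

    context = question
    answer = ""
    for sub in subquestions + [question]:  # end by answering the original question
        answer = query_llm(f"{context}\nQ: {sub}\nA:")
        context += f"\nQ: {sub}\nA: {answer}"  # solved subproblems become context
    return answer
```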
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi. [abs], 2022.5
- Faithful Reasoning Using Large Language Models. Antonia Creswell, Murray Shanahan. [abs], 2022.8
- Compositional Semantic Parsing with Large Language Models. Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou. [abs], 2022.9
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks. Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal. [abs], 2022.10
- Measuring and Narrowing the Compositionality Gap in Language Models. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis. [abs], 2022.10
- Successive Prompting for Decomposing Complex Questions. Dheeru Dua, Shivanshu Gupta, Sameer Singh, Matt Gardner. [abs], 2022.12
- The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning. Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, Eric Xing. [abs], 2022.12
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language. Seyed Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, Deepak Ramachandran. [abs], 2022.12
- Iterated Decomposition: Improving Science Q&A by Supervising Reasoning Processes. Justin Reppert, Ben Rachbach, Charlie George, Luke Stebbing, Jungwon Byun, Maggie Appleton, Andreas Stuhlmüller. [abs], 2023.1
- Reframing Human-AI Collaboration for Generating Free-Text Explanations. Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi. [abs], 2021.12
- The Unreliability of Explanations in Few-Shot In-Context Learning. Xi Ye, Greg Durrett. [abs], 2022.5
- Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou. [abs], 2022.3
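Self-consistency (Wang et al., above) replaces greedy decoding with sampling several reasoning paths at nonzero temperature and taking a majority vote over their final answers. A sketch, again assuming the hypothetical `query_llm` stub and the "The answer is N" convention of CoT prompts:

```python
# Sketch of self-consistency decoding: sample diverse chains of thought,
# then majority-vote the extracted final answers.
import re
from collections import Counter

def extract_answer(chain: str) -> str | None:
    match = re.search(r"answer is\s*(-?\d[\d.,]*)", chain)
    return match.group(1) if match else None

def self_consistency(prompt: str, n_samples: int = 10) -> str:
    answers = [
        ans
        for _ in range(n_samples)
        if (ans := extract_answer(query_llm(prompt, temperature=0.7))) is not None
    ]
    # The most frequent answer across sampled reasoning paths wins.
    return Counter(answers).most_common(1)[0][0] if answers else ""
```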
- On the Advance of Making Language Models Better Reasoners. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen. [abs], 2022.6
- Large Language Models are reasoners with Self-Verification. Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao. [abs], 2022.12
- STaR: Bootstrapping Reasoning With Reasoning. Eric Zelikman, Yuhuai Wu, Noah D. Goodman. [abs], 2022.3
- Large Language Models Can Self-Improve. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han. [abs], 2022.10
- Mind's Eye: Grounded Language Model Reasoning through Simulation. Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai. [abs], 2022.10
- Language Models of Code are Few-Shot Commonsense Learners. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig. [abs], 2022.10
- PAL: Program-aided Language Models. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig. [abs], 2022.11
- Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen. [abs], 2022.11
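PAL and Program of Thoughts (the two entries above) both prompt the model to emit executable Python rather than natural-language arithmetic, so the interpreter, not the model, performs the computation. A sketch of that pattern, with an illustrative prompt and the same hypothetical `query_llm` stub; a real system should sandbox generated code instead of `exec()`-ing it directly:

```python
# Sketch of program-aided reasoning in the spirit of PAL / Program of Thoughts:
# the model writes Python, the interpreter computes the answer.

PROGRAM_PROMPT = """Write Python code that computes the answer to the question
and stores it in a variable named `answer`.
Question: {question}
# Python code:"""

def program_aided(question: str):
    code = query_llm(PROGRAM_PROMPT.format(question=question))
    namespace: dict = {}
    exec(code, namespace)           # computation is offloaded to the interpreter
    return namespace.get("answer")  # convention: the generated program sets `answer`
```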
- Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning. Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li. [abs], 2023.2
- Generated Knowledge Prompting for Commonsense Reasoning. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi. [abs], 2021.10
- Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering. Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, Yejin Choi. [abs], 2022.10
- Explanations from Large Language Models Make Small Reasoners Better. Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan. [abs], 2022.10
- PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales. Peifeng Wang, Aaron Chan, Filip Ilievski, Muhao Chen, Xiang Ren. [abs], 2022.11
- TSGP: Two-Stage Generative Prompting for Unsupervised Commonsense Question Answering. Yueqing Sun, Yu Zhang, Le Qi, Qi Shi. [abs], 2022.11
- Distilling Multi-Step Reasoning Capabilities of Large Language Models into Smaller Models via Semantic Decompositions. Kumar Shridhar, Alessandro Stolfo, Mrinmaya Sachan. [abs], 2022.12
- Teaching Small Language Models to Reason. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn. [abs], 2022.12
- Large Language Models Are Reasoning Teachers. Namgyu Ho, Laura Schmid, Se-Young Yun. [abs], 2022.12
- Specializing Smaller Language Models towards Multi-Step Reasoning. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot. [abs], 2023.1
- LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning. Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, Xiaodan Liang. [abs], 2022.5
- Selective Annotation Makes Language Models Better Few-Shot Learners. Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu. [abs], 2022.9
- Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan. [abs], 2022.9
- Rethinking with Retrieval: Faithful Large Language Model Inference. Hangfeng He, Hongming Zhang, Dan Roth. [abs], 2023.1
- Language Model Cascades. David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton. [abs], 2022.7
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, Ashwin Kalyan. [abs], 2022.9
- Scaling Instruction-Finetuned Language Models. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason Wei. [abs], 2022.10
- See, Think, Confirm: Interactive Prompting Between Vision and Language Models for Knowledge-based Visual Reasoning. Zhenfang Chen, Qinhong Zhou, Yikang Shen, Yining Hong, Hao Zhang, Chuang Gan. [abs], 2023.1
- Can language models learn from explanations in context? Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill. [abs], 2022.4
- Emergent Abilities of Large Language Models. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. [abs], 2022.6
- Language models show human-like content effects on reasoning. Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, Felix Hill. [abs], 2022.7
- Rationale-Augmented Ensembles in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou. [abs], 2022.7
- Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts. Joel Jang, Seonghyeon Ye, Minjoon Seo. [abs], 2022.9
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei. [abs], 2022.10
- Language Models are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-thought. Abulhair Saparov, He He. [abs], 2022.10
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models. Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo. [abs], 2022.10
- Emergent Analogical Reasoning in Large Language Models. Taylor Webb, Keith J. Holyoak, Hongjing Lu. [abs], 2022.12
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun. [abs], 2022.12
- On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning. Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, Diyi Yang. [abs], 2022.12
- Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model. Parishad BehnamGhader, Santiago Miret, Siva Reddy. [abs], 2022.12
- Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers. Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei. [abs], 2022.12
- Dissociating language and thought in large language models: a cognitive perspective. Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko. [abs], 2023.1
| Reasoning Skills | Benchmarks |
|---|---|
| Arithmetic Reasoning | GSM8K, SVAMP, ASDiv, AQuA-RAT, MAWPS, AddSub, MultiArith, SingleEq, SingleOp |
| Commonsense Reasoning | CommonsenseQA, StrategyQA, ARC, SayCan, BoolQ, HotpotQA, OpenBookQA, PIQA |
| Symbolic Reasoning | Last Letter Concatenation, Coin Flip, Reverse List |
| Logical Reasoning | ProofWriter, EntailmentBank, RuleTaker, CLUTRR |
| Multimodal Reasoning | ScienceQA |
| Others | BIG-bench, SCAN |
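The symbolic reasoning benchmarks in this table are synthetic and easy to regenerate. For instance, the last-letter-concatenation and coin-flip tasks can be constructed as below; this is a sketch of the task formats described by Wei et al. (2022), not the official evaluation data.

```python
# Illustrative generators for two synthetic symbolic-reasoning tasks.

def last_letter_concatenation(words: list[str]) -> tuple[str, str]:
    question = (f'Take the last letters of the words in "{" ".join(words)}" '
                "and concatenate them.")
    answer = "".join(word[-1] for word in words)
    return question, answer

def coin_flip(flips: list[bool]) -> tuple[str, str]:
    steps = " ".join(
        f"Person {i} {'flips' if flipped else 'does not flip'} the coin."
        for i, flipped in enumerate(flips, start=1)
    )
    # The coin stays heads up iff it was flipped an even number of times.
    answer = "yes" if sum(flips) % 2 == 0 else "no"
    return f"A coin is heads up. {steps} Is the coin still heads up?", answer
```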
- ThoughtSource: a central, open resource for data and tools related to chain-of-thought reasoning in LLMs.
- LangChain: a library designed to help developers build applications using LLMs combined with other sources of computation or knowledge.
- LogiTorch: a PyTorch-based library for logical reasoning on natural language.
- λprompt: a library for building full LLM-based prompt machines, including ones that self-edit to correct themselves and even self-write their own execution code.
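The common pattern behind such tooling is a loop in which the model either answers or requests a tool call whose result is fed back into the prompt. The sketch below shows that loop without any framework; all names are illustrative (reusing the hypothetical `query_llm` stub from earlier), not any library's actual API.

```python
# Framework-free sketch of a tool-augmented prompt loop.

def calculator(expression: str) -> str:
    # Toy arithmetic tool; a real system would parse/sandbox, not eval().
    return str(eval(expression, {"__builtins__": {}}))

def tool_augmented(question: str, max_turns: int = 3) -> str:
    context = (
        "Answer the question. If you need arithmetic, write CALC[<expression>] "
        f"and wait for the result.\nQ: {question}\nA:"
    )
    response = ""
    for _ in range(max_turns):
        response = query_llm(context)
        if "CALC[" not in response:
            break  # the model answered directly
        expression = response.split("CALC[", 1)[1].split("]", 1)[0]
        context += f" {response}\nCALC result: {calculator(expression)}\n"
    return response
```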
- Add a new paper or update an existing paper, thinking about which category the work should belong to.
- Use the same format as existing entries to describe the work.
- Add the abstract link of the paper (the /abs/ format if it is an arXiv publication).
- A very brief explanation of why you think the paper should be added or updated is recommended.
Don't worry if you get something wrong; it will be fixed for you. Just contribute and promote your awesome work here!
If you find this survey useful for your research, please consider citing:
```bibtex
@article{qiao2022reasoning,
  title={Reasoning with Language Model Prompting: A Survey},
  author={Qiao, Shuofei and Ou, Yixin and Zhang, Ningyu and Chen, Xiang and Yao, Yunzhi and Deng, Shumin and Tan, Chuanqi and Huang, Fei and Chen, Huajun},
  journal={arXiv preprint arXiv:2212.09597},
  year={2022}
}
```