Chain-of-ThoughtsPapers

A collection of papers on the trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models". A minimal illustrative code sketch of the core ideas follows the paper list.

Papers

  1. Chain of Thought Prompting Elicits Reasoning in Large Language Models.

    Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou [pdf] 2022.1

  2. Self-Consistency Improves Chain of Thought Reasoning in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [pdf] 2022.3

  3. STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning.

    Eric Zelikman, Yuhuai Wu, Noah D. Goodman [pdf] 2022.3

  4. PaLM: Scaling Language Modeling with Pathways.

    Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, Noah Fiedel [pdf] 2022.4

  5. Can language models learn from explanations in context?.

    Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L. McClelland, Jane X. Wang, Felix Hill [pdf] 2022.4

  6. Inferring Implicit Relations with Language Models.

    Uri Katz, Mor Geva, Jonathan Berant [pdf] 2022.4

  7. The Unreliability of Explanations in Few-Shot In-Context Learning.

    Xi Ye, Greg Durrett [pdf] 2022.5

  8. Large Language Models are Zero-Shot Reasoners.

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa [pdf] 2022.5

  9. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models.

    Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi [pdf] 2022.5

  10. Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning.

    Antonia Creswell, Murray Shanahan, Irina Higgins [pdf] 2022.5

  11. On the Advance of Making Language Models Better Reasoners.

    Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen [pdf] 2022.6

  12. Emergent Abilities of Large Language Models.

    Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus [pdf] 2022.6

  13. Minerva: Solving Quantitative Reasoning Problems with Language Models.

Ethan Dyer, Guy Gur-Ari (Google Research, Blueshift Team) [blog] 2022.6

  14. JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding.

    Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou, Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen [pdf] 2022.6

  15. A Dataset and Benchmark for Automatically Answering and Generating Machine Learning Final Exams.

    Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, Iddo Drori [pdf] 2022.6

  16. Rationale-Augmented Ensembles in Language Models.

    Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou [pdf] 2022.7

  17. Language Model Cascades.

    David Dohan, Winnie Xu, Aitor Lewkowycz, Jacob Austin, David Bieber, Raphael Gontijo Lopes, Yuhuai Wu, Henryk Michalewski, Rif A. Saurous, Jascha Sohl-dickstein, Kevin Murphy, Charles Sutton [pdf] 2022.7

  18. Text and Patterns: For Effective Chain of Thought, It Takes Two to Tango.

    Aman Madaan, Amir Yazdanbakhsh [pdf] 2022.9

  19. Compositional Semantic Parsing with Large Language Models.

    Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou [pdf] 2022.9

  20. Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning.

    Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, Ashwin Kalyan [pdf] 2022.9

  21. Language Models are Multilingual Chain-of-Thought Reasoners.

    Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei [pdf] 2022.10

  22. Automatic Chain of Thought Prompting in Large Language Models.

    Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola [pdf] 2022.10

  23. Binding Language Models in Symbolic Languages.

    Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu [pdf] 2022.10

  24. ReAct: Synergizing Reasoning and Acting in Language Models.

    Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao [pdf] 2022.10

  25. Ask Me Anything: A simple strategy for prompting language models.

    Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré [pdf], [code] 2022.10

  26. Language Models of Code are Few-Shot Commonsense Learners.

    Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, Graham Neubig [pdf], [code] 2022.10

  27. Large Language Models Can Self-Improve.

    Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han [pdf] 2022.10

  28. Large Language Models are few(1)-shot Table Reasoners.

    Wenhu Chen [pdf] 2022.10

  29. PAL: Program-aided Language Models.

    Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, Graham Neubig [pdf] 2022.11

  30. Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks.

    Wenhu Chen, Xueguang Ma, Xinyi Wang, William W. Cohen [pdf] 2022.11

  31. Reasoning with Language Model Prompting: A Survey.

    Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen [pdf] 2022.12

  32. Large Language Models are reasoners with Self-Verification.

    Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, Jun Zhao [pdf], [code] 2022.12

  33. Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning.

    Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, Yongbin Li [pdf] 2023.2
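
Illustrative code sketch

For readers new to the area, here is a minimal sketch of the two ideas that open the list: few-shot chain-of-thought prompting (paper 1) and self-consistency decoding (paper 2), which samples several reasoning paths and takes a majority vote over the final answers. It is a toy under stated assumptions, not any paper's reference implementation: `sample_completion` is a hypothetical stand-in for an LLM sampling API and returns canned rationales so the snippet runs on its own.

```python
# A minimal sketch (not any paper's official code) of few-shot chain-of-thought
# prompting with self-consistency decoding. `sample_completion` is a hypothetical
# placeholder for an LLM API; it returns canned rationales so the script runs as-is.
import random
import re
from collections import Counter

# One worked exemplar in the style of the chain-of-thought paper:
# question, step-by-step rationale, then "The answer is X."
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)


def build_cot_prompt(question):
    """Prepend the few-shot exemplar so the model imitates the rationale format."""
    return COT_EXEMPLAR + "Q: " + question + "\nA:"


def sample_completion(prompt, temperature=0.7):
    """Hypothetical stand-in for a sampled LLM completion (assumption, not a real API)."""
    return random.choice([
        " There are 3 cars and 2 more arrive. 3 + 2 = 5. The answer is 5.",
        " Starting from 3 cars and adding 2 gives 5. The answer is 5.",
        " 3 cars plus 2 cars makes 6. The answer is 6.",  # an occasional wrong reasoning path
    ])


def extract_answer(completion):
    """Pull the final answer out of a 'The answer is X.' rationale."""
    match = re.search(r"The answer is (.+?)\.", completion)
    return match.group(1) if match else None


def self_consistent_answer(question, num_samples=10):
    """Self-consistency: sample several rationales, majority-vote their final answers."""
    prompt = build_cot_prompt(question)
    answers = [extract_answer(sample_completion(prompt)) for _ in range(num_samples)]
    votes = Counter(a for a in answers if a is not None)
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    question = ("There are 3 cars in the parking lot and 2 more cars arrive. "
                "How many cars are in the parking lot?")
    print(self_consistent_answer(question))  # most likely prints 5
```

In practice you would swap `sample_completion` for a real model call sampled at a nonzero temperature; a single greedy sample recovers plain chain-of-thought prompting.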