A curated list of in-context learning resources, including both classic and up-to-date papers. This project will be continuously updated and improved.
Keyword Explanation

- Classic: classic papers recommended to those who want a quick overview of the field.
- MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action (2023.03.20) [pdf]
- Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (2023.03.08) [pdf]
- What Makes Good Examples for Visual In-Context Learning? (2023.01.31) [pdf]
- Multimodal Chain-of-Thought Reasoning in Language Models (2023.02.02) [pdf]
- Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language (2022.04.01) [pdf]
- Multimodal Few-Shot Learning with Frozen Language Models (2021.06.25) [pdf]
- Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings (2023.05.03) [pdf]
- Chain of Thought Prompt Tuning in Vision Language Models (2023.04.16) [pdf]
- What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning (2023.05.16) [pdf]
- Symbol tuning improves in-context learning in language models (2023.05.15) [pdf]
- Larger language models do in-context learning differently (2023.03.07) [pdf]
- Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning (2023.02.28) [pdf]
- Transformers as Algorithms: Generalization and Stability in In-context Learning (2023.01.17) [pdf]
- Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers (2022.12.20) [pdf]
- Transformers learn in-context by gradient descent (2022.12.15) [pdf]
- What learning algorithm is in-context learning? Investigations with linear models (2022.11.28) [pdf]
- In-context Learning and Induction Heads (2022.09.24) [pdf]
- Data Distributional Properties Drive Emergent In-Context Learning in Transformers (2022.04.22) [pdf]
- An Explanation of In-context Learning as Implicit Bayesian Inference (2021.11.03) [pdf]
- Active Prompting with Chain-of-Thought for Large Language Models (2023.02.23) [pdf]
- Faithful Chain-of-Thought Reasoning (2023.01.31) [pdf]
- Automatic Chain of Thought Prompting in Large Language Models (2022.10.07) [pdf]
- Large Language Models are Zero-Shot Reasoners (2022.05.24) [pdf]
- Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022.03.21) [pdf]
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022.01.28) [pdf]
- Large Language Models Can Be Easily Distracted by Irrelevant Context (2023.01.23) [pdf]
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters (2022.12.20) [pdf]
- Large Language Models are Better Reasoners with Self-Verification (2022.12.19) [pdf]
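
For newcomers, the sketch below illustrates the prompting patterns behind three of the papers listed above: few-shot chain-of-thought (Wei et al., 2022), zero-shot CoT (Kojima et al., 2022), and the answer-aggregation step of self-consistency (Wang et al., 2022). It is a minimal illustration, not code from any of the papers: the helper names are invented here, the demonstration text is adapted from the CoT paper's well-known tennis-ball example, and no model is actually called.

```python
from collections import Counter

# Few-shot chain-of-thought prompting (Wei et al., 2022): each demonstration
# pairs a question with a worked, step-by-step rationale before the answer,
# so the model continues the pattern on the new question.
COT_DEMONSTRATIONS = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend worked demonstrations (with rationales) to a new question."""
    return f"{COT_DEMONSTRATIONS}\nQ: {question}\nA:"

def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT (Kojima et al., 2022): no demonstrations, just a trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."

def majority_vote(final_answers: list[str]) -> str:
    """Aggregation step of self-consistency (Wang et al., 2022): sample several
    reasoning paths from the model, then keep the most frequent final answer.
    (The sampling itself would be a model API call and is not shown here.)"""
    return Counter(final_answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(build_cot_prompt("A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?"))
    print(build_zero_shot_cot_prompt("What is 17 * 24?"))
    print(majority_vote(["11", "11", "12"]))  # -> "11"
```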
If you know of other papers worth reading, useful resources, or anything else that belongs here, feel free to contribute!