This is a collection of papers on large language models for information retrieval, organized according to our survey paper Large Language Models for Information Retrieval: A Survey.
Feel free to contact us if you find a mistake or have any advice. Email: yutaozhu94@gmail.com and dou@ruc.edu.cn.
Please kindly cite our paper if it helps your research:
```bibtex
@article{LLM4IRSurvey,
  author       = {Yutao Zhu and
                  Huaying Yuan and
                  Shuting Wang and
                  Jiongnan Liu and
                  Wenhan Liu and
                  Chenlong Deng and
                  Zhicheng Dou and
                  Ji-Rong Wen},
  title        = {Large Language Models for Information Retrieval: A Survey},
  journal      = {CoRR},
  volume       = {abs/2308.07107},
  year         = {2023},
  url          = {https://arxiv.org/abs/2308.07107},
  eprinttype   = {arXiv},
  eprint       = {2308.07107}
}
```
- Query2doc: Query Expansion with Large Language Models, Wang et al., arXiv 2023. [Paper]
- Generative and Pseudo-Relevant Feedback for Sparse, Dense and Learned Sparse Retrieval, Mackie et al., arXiv 2023. [Paper]
- Generative Relevance Feedback with Large Language Models, Mackie et al., SIGIR 2023 (short paper). [Paper]
- GRM: Generative Relevance Modeling Using Relevance-Aware Sample Estimation for Document Retrieval, Mackie et al., arXiv 2023. [Paper]
- Large Language Models Know Your Contextual Search Intent: A Prompting Framework for Conversational Search, Mao et al., arXiv 2023. [Paper]
- Precise Zero-Shot Dense Retrieval without Relevance Labels, Gao et al., ACL 2023. [Paper]
- Query Expansion by Prompting Large Language Models, Jagerman et al., arXiv 2023. [Paper]
- Large Language Models are Strong Zero-Shot Retriever, Shen et al., arXiv 2023. [Paper]
- Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting, Ye et al., EMNLP 2023 (Findings). [Paper]
- QUILL: Query Intent with Large Language Models using Retrieval Augmentation and Multi-stage Distillation, Srinivasan et al., EMNLP 2022 (Industry). [Paper] (This paper explores fine-tuning methods in its baseline experiments.)
- Knowledge Refinement via Interaction Between Search Engines and Large Language Models, Feng et al., arXiv 2023. [Paper]
- Query Rewriting for Retrieval-Augmented Large Language Models, Ma et al., arXiv 2023. [Paper]
- InPars: Data Augmentation for Information Retrieval using Large Language Models, Bonifacio et al., arXiv 2022. [Paper]
- InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval, Jeronymo et al., arXiv 2023. [Paper]
- Promptagator: Few-shot Dense Retrieval From 8 Examples, Dai et al., ICLR 2023. [Paper]
- AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation, Meng et al., arXiv 2023. [Paper]
- UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers, Saad-Falcon et al., arXiv 2023. [Paper]
- Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models, Peng et al., arXiv 2023. [Paper]
- Questions Are All You Need to Train a Dense Passage Retriever, Sachan et al., ACL 2023. [Paper]
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators, Chen et al., EMNLP 2023. [Paper]
- Text and Code Embeddings by Contrastive Pre-Training, Neelakantan et al., arXiv 2022. [Paper]
- Large Dual Encoders Are Generalizable Retrievers, Ni et al., ACL 2022. [Paper]
- Task-aware Retrieval with Instructions, Asai et al., ACL 2023 (Findings). [Paper]
- Transformer memory as a differentiable search index, Tay et al., NeurIPS 2022. [Paper]
- Large Language Models are Built-in Autoregressive Search Engines, Ziems et al., ACL 2023 (Findings). [Paper]
- Document Ranking with a Pretrained Sequence-to-Sequence Model, Nogueira et al., EMNLP 2020 (Findings). [Paper]
- Text-to-Text Multi-view Learning for Passage Re-ranking, Ju et al., SIGIR 2021 (Short Paper). [Paper]
- The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models, Pradeep et al., arXiv 2021. [Paper]
- RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses, Zhuang et al., SIGIR 2023 (Short Paper). [Paper]
- Holistic Evaluation of Language Models, Liang et al., arXiv 2022. [Paper]
- Improving Passage Retrieval with Zero-Shot Question Generation, Sachan et al., EMNLP 2022. [Paper]
- Discrete Prompt Optimization via Constrained Generation for Zero-shot Re-ranker, Cho et al., ACL 2023 (Findings). [Paper]
- Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent, Sun et al., arXiv 2023. [Paper]
- Zero-Shot Listwise Document Reranking with a Large Language Model, Ma et al., arXiv 2023. [Paper]
- Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting, Qin et al., arXiv 2023. [Paper]
- ExaRanker: Explanation-Augmented Neural Ranker, Ferraretto et al., SIGIR 2023 (Short Paper). [Paper]
- InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers, Boytsov et al., arXiv 2023. [Paper]
- Generating Synthetic Documents for Cross-Encoder Re-Rankers, Askari et al., arXiv 2023. [Paper]
- REALM: Retrieval-Augmented Language Model Pre-Training, Guu et al., arXiv 2020. [Paper]
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, Lewis et al., NeurIPS 2020. [Paper]
- REPLUG: Retrieval-Augmented Black-Box Language Models, Shi et al., arXiv 2023. [Paper]
- Atlas: Few-shot Learning with Retrieval Augmented Language Models, Izacard et al., arXiv 2022. [Paper]
- Internet-augmented Language Models through Few-shot Prompting for Open-domain Question Answering, Lazaridou et al., arXiv 2022. [Paper]
- Rethinking with Retrieval: Faithful Large Language Model Inference, He et al., arXiv 2023. [Paper]
- RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit, Liu et al., arXiv 2023. [Paper]
- In-Context Retrieval-Augmented Language Models, Ram et al., arXiv 2023. [Paper]
- Improving Language Models by Retrieving from Trillions of Tokens, Borgeaud et al., ICML 2022. [Paper]
- Interleaving Retrieval with Chain-of-thought Reasoning for Knowledge-intensive Multi-step Questions, Trivedi et al., ACL 2023. [Paper]
- Active Retrieval Augmented Generation, Jiang et al., arXiv 2023. [Paper]
- Measuring and Narrowing the Compositionality Gap in Language Models, Press et al., arXiv 2022. [Paper]
- Demonstrate-Search-Predict: Composing Retrieval and Language Models for Knowledge-intensive NLP, Khattab et al., arXiv 2022. [Paper]
- Answering Questions by Meta-Reasoning over Multiple Chains of Thought, Yoran et al., arXiv 2023. [Paper]
- WebGPT: Browser-assisted Question-answering with Human Feedback, Nakano et al., arXiv 2021. [Paper]
- WebCPM: Interactive Web Search for Chinese Long-form Question Answering, Qin et al., ACL 2023. [Paper]