llm-hallucination-survey

Hallucination refers to generated content that is nonsensical or unfaithful to the provided source content, or even to world knowledge.

This issue can hinder the real-world adoption of LLMs in various applications and scenarios.

Evaluation of Hallucination for LLMs

  1. TruthfulQA: Measuring How Models Mimic Human Falsehoods

    Stephanie Lin, Jacob Hilton, Owain Evans [paper] 2022.5

  2. Towards Tracing Factual Knowledge in Language Models Back to the Training Data

    Ekin Akyürek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu [paper] 2022.5

  3. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

    Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung [paper] 2023.2

  4. Why Does ChatGPT Fall Short in Providing Truthful Answers?

    Shen Zheng, Jie Huang, Kevin Chen-Chuan Chang [paper] 2023.4

  5. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models

    Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen [paper] 2023.5

  6. Automatic Evaluation of Attribution by Large Language Models

    Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, Huan Sun [paper] 2023.5

  7. Adaptive Chameleon or Stubborn Sloth: Unraveling the Behavior of Large Language Models in Knowledge Clashes

    Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su [paper] 2023.5

  8. LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond

    Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu [paper] 2023.5

  9. Evaluating the Factual Consistency of Large Language Models Through News Summarization

    Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, Colin Raffel [paper] 2023.5

  10. Methods for Measuring, Updating, and Visualizing Factual Beliefs in Language Models

    Peter Hase, Mona Diab, Asli Celikyilmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer [paper] 2023.5

  11. How Language Model Hallucinations Can Snowball

    Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith [paper] 2023.5

  12. Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation

    Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev [paper] 2023.5

  13. Evaluating Factual Consistency of Texts with Semantic Role Labeling

    Jing Fan, Dennis Aumiller, Michael Gertz [paper] 2023.5

  14. FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation

    Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi [paper] 2023.5 (see the sketch after this list)

  15. Sources of Hallucination by Large Language Models on Inference Tasks

    Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman [paper] 2023.5

  16. KoLA: Carefully Benchmarking World Knowledge of Large Language Models

    Jifan Yu, Xiaozhi Wang, Shangqing Tu, Shulin Cao, Daniel Zhang-Li, Xin Lv, Hao Peng, Zijun Yao, Xiaohan Zhang, Hanming Li, Chunyang Li, Zheyuan Zhang, Yushi Bai, Yantao Liu, Amy Xin, Nianyi Lin, Kaifeng Yun, Linlu Gong, Jianhui Chen, Zhili Wu, Yunjia Qi, Weikai Li, Yong Guan, Kaisheng Zeng, Ji Qi, Hailong Jin, Jinxin Liu, Yu Gu, Yuan Yao, Ning Ding, Lei Hou, Zhiyuan Liu, Bin Xu, Jie Tang, Juanzi Li [paper] 2023.6

  17. Generating Benchmarks for Factuality Evaluation of Language Models

    Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham [paper] 2023.7
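
Many of the benchmarks above score factuality at the level of individual claims rather than whole responses. As a rough illustration of that idea, the sketch below computes a FActScore-style atomic-fact precision (entry 14). It is a minimal sketch, not the evaluation pipeline released with any of these papers: `generate_atomic_facts` and `is_supported_by_source` are hypothetical placeholders for an LLM-based claim splitter and a retrieval-based verifier over a knowledge source such as Wikipedia.

```python
# Minimal sketch of atomic-fact precision (assumption: the two callables below
# are supplied by the user; they are hypothetical stand-ins, not APIs from the
# papers listed above).

from typing import Callable, List


def atomic_fact_precision(
    generation: str,
    generate_atomic_facts: Callable[[str], List[str]],
    is_supported_by_source: Callable[[str], bool],
) -> float:
    """Fraction of atomic facts in `generation` supported by the knowledge source."""
    facts = generate_atomic_facts(generation)   # e.g., "X was born in 1970", ...
    if not facts:
        return 0.0                              # nothing checkable was extracted
    supported = sum(is_supported_by_source(f) for f in facts)
    return supported / len(facts)
```

Scoring at the atomic-fact level keeps one unsupported claim from being hidden inside an otherwise accurate long-form answer, which is the main motivation behind fine-grained metrics of this kind.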

Mitigation of Hallucination for LLMs

  1. Factuality Enhanced Language Models for Open-Ended Text Generation

    Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro [paper] 2022.6

  2. Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback

    Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, Jianfeng Gao [paper] 2023.2

  3. SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models

    Potsawee Manakul, Adian Liusie, Mark J. F. Gales [paper] 2023.3 (see the sketch after this list)

  4. Zero-shot Faithful Factual Error Correction

    Kung-Hsiang Huang, Hou Pong Chan, Heng Ji [paper] 2023.5

  5. CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing

    Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen [paper] 2023.5

  6. PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions

    Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu [paper] 2023.5

  7. Mitigating Language Model Hallucination with Interactive Question-Knowledge Alignment

    Shuo Zhang, Liangming Pan, Junzhou Zhao, William Yang Wang [paper] 2023.5

  8. Improving Factuality and Reasoning in Language Models through Multiagent Debate

    Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch [paper] 2023.5

  9. Enabling Large Language Models to Generate Text with Citations

    Tianyu Gao, Howard Yen, Jiatong Yu, Danqi Chen [paper] 2023.5

  10. Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework

    Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing [paper] 2023.5

  11. Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models

    Miaoran Li, Baolin Peng, Zhu Zhang [paper] 2023.5

  12. Augmented Large Language Models with Parametric Knowledge Guiding

    Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang [paper] 2023.5

  13. LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond

    Philippe Laban, Wojciech Kryściński, Divyansh Agarwal, Alexander R. Fabbri, Caiming Xiong, Shafiq Joty, Chien-Sheng Wu [paper] 2023.5

  14. Measuring and Modifying Factual Knowledge in Large Language Models

    Pouya Pezeshkpour [paper] 2023.6

  15. Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

    Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg [paper] 2023.6

  16. LLM Calibration and Automatic Hallucination Detection via Pareto Optimal Self-supervision

    Theodore Zhao, Mu Wei, J. Samuel Preston, Hoifung Poon [paper] 2023.6

  17. A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation

    Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu [paper] 2023.7
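
Several of the methods above first detect likely hallucinations, by checking a response against additional model samples or external evidence, before attempting any correction. The sketch below illustrates the sampling-consistency idea popularized by SelfCheckGPT (entry 3) under stated assumptions; it is a minimal sketch, and `sample_answer` and `is_consistent` are hypothetical placeholders (e.g., a temperature-sampled re-query of the model and an NLI- or LLM-based agreement judge), not the paper's actual components.

```python
# Minimal sketch of sampling-based consistency checking (assumption: the two
# callables below are user-supplied stand-ins, not APIs from the papers above).

from typing import Callable, List


def hallucination_scores(
    sentences: List[str],
    sample_answer: Callable[[], str],
    is_consistent: Callable[[str, str], bool],
    num_samples: int = 5,
) -> List[float]:
    """Return a per-sentence score in [0, 1]; higher means less support from samples."""
    samples = [sample_answer() for _ in range(num_samples)]
    scores = []
    for sent in sentences:
        support = sum(is_consistent(sent, s) for s in samples)
        scores.append(1.0 - support / num_samples)  # unsupported sentences score high
    return scores
```

Sentences that other samples rarely reproduce are treated as low-confidence and can then be re-verified, revised, or flagged, which is the common pattern behind the detect-then-mitigate methods in this section.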