reasoning-survey/Awesome-Reasoning-Foundation-Models

Two related references on LLM-generated misinformation

canyuchen opened this issue · 1 comment

Congratulations on your recent survey paper! I am impressed by its depth and comprehensiveness.

I would greatly appreciate it if you could consider citing our work [1][2] near the statement "LLMs can contribute to the dissemination of misinformation, both intentionally and unintentionally" in Section 6.2 "Interpretability and Transparency", in the "Hallucinations" part of Section 5 "Discussion: Challenges, Limitations, and Risks", or near "Various intended attacks have been identified, including the ... disinformation" in Section 6.1 "Safety and Privacy".

You could also check out our project website: https://llm-misinformation.github.io/ Thanks a lot!

[1] Combating Misinformation in the Age of LLMs: Opportunities and Challenges https://arxiv.org/abs/2311.05656

  • TL;DR: A survey of the opportunities (can we utilize LLMs to combat misinformation?) and challenges (how to combat LLM-generated misinformation?) of combating misinformation in the age of LLMs.
  • abstract: Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of Large Language Models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in the fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emergent question is: how to utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. Then, another important question is: how to combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we illustrate the current efforts and present an outlook for these two fundamental questions respectively. The goal of this survey paper is to facilitate the progress of utilizing LLMs for fighting misinformation and call for interdisciplinary efforts from different stakeholders for combating LLM-generated misinformation.

[2] Can LLM-Generated Misinformation Be Detected? https://arxiv.org/abs/2309.13788

  • TL;DR: We discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm.
  • abstract: The advent of Large Language Models (LLMs) has made a transformative impact. However, the potential that LLMs such as ChatGPT can be exploited to generate misinformation has posed a serious concern to online safety and public trust. A fundamental research question is: will LLM-generated misinformation cause more harm than human-written misinformation? We propose to tackle this question from the perspective of detection difficulty. We first build a taxonomy of LLM-generated misinformation. Then we categorize and validate the potential real-world methods for generating misinformation with LLMs. Then, through extensive empirical investigation, we discover that LLM-generated misinformation can be harder to detect for humans and detectors compared to human-written misinformation with the same semantics, which suggests it can have more deceptive styles and potentially cause more harm. We also discuss the implications of our discovery on combating misinformation in the age of LLMs and the countermeasures.

Thank you very much for your detailed description. This really helps a lot!

We have added the two works to our survey and the GitHub repository. If there are any other works that we have missed, please let us know.

Again, thank you very much for your great work for the reasoning community. Thank you for your attention to our work, and we wish you a good day!