hallucinations
There are 32 repositories under the hallucinations topic.
vectara/hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
BradyFU/Woodpecker
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
EdinburghNLP/awesome-hallucination-detection
List of papers on hallucination detection in LLMs.
voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
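DoLa's core idea, scoring the next token by the log-probability gap between a late ("mature") layer and an earlier ("premature") layer, restricted to tokens the mature layer already finds plausible, can be sketched roughly as follows. This is a toy illustration with made-up logits, not the official implementation; `dola_scores` and `alpha` are names chosen here for clarity.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def dola_scores(mature_logits, premature_logits, alpha=0.1):
    """Contrast the final ("mature") layer's distribution with an earlier
    ("premature") layer's. Tokens whose mature probability falls below
    alpha * (top mature probability) are masked out (the adaptive
    plausibility constraint); the rest are scored by the log-prob gap."""
    lp_mature = log_softmax(np.asarray(mature_logits, dtype=float))
    lp_premature = log_softmax(np.asarray(premature_logits, dtype=float))
    keep = lp_mature >= lp_mature.max() + np.log(alpha)
    return np.where(keep, lp_mature - lp_premature, -np.inf)

# Toy example: token 0's probability grows between the early and final
# layers (factual knowledge "emerging" late), so contrastive scoring
# favors it over token 1, which the early layer already predicted.
scores = dola_scores([3.0, 2.8, 0.1], [1.0, 2.5, 0.2])
print(int(np.argmax(scores)))
```

The contrast rewards tokens whose probability increases across layers, which the paper associates with factual content rather than surface-level priors.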
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers, updated daily.
safeguards-ai/safeguards-shield
Build accurate and secure AI applications to unlock value faster
IAAR-Shanghai/UHGEval
[ACL 2024] User-friendly evaluation framework with an eval suite and benchmarks: UHGEval, HaluEval, HalluQA, etc.
ictnlp/TruthX
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
PKU-YuanGroup/Hallucination-Attack
Attack to induce hallucinations in LLMs
voidism/Lookback-Lens
Official implementation for the paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
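The "lookback ratio" feature this work builds on, the fraction of a generated token's attention that lands on the input context rather than on previously generated tokens, is simple to compute. The sketch below shows only the feature extraction on an invented toy attention matrix; the paper additionally trains a classifier on these per-head ratios, which is not reproduced here.

```python
import numpy as np

def lookback_ratio(attn, n_context):
    """Per-head lookback ratio for one decoding step.

    attn: array of shape (heads, seq_len) with attention weights.
    n_context: number of leading positions that belong to the input
    context; the remainder are previously generated tokens.
    Returns, per head, attention mass on context / total attention mass."""
    attn = np.asarray(attn, dtype=float)
    ctx = attn[:, :n_context].sum(axis=1)
    total = attn.sum(axis=1)
    return ctx / total

# Toy weights: head 0 mostly attends to the context (high ratio),
# head 1 mostly attends to its own generation (low ratio).
ratios = lookback_ratio([[0.7, 0.2, 0.05, 0.05],
                         [0.1, 0.1, 0.4, 0.4]], n_context=2)
print(ratios)
```

Low lookback ratios are the signal the paper correlates with contextual hallucination: the model is "looking at" its own output instead of the source.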
rungalileo/hallucination-index
Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate.
X-PLUG/mPLUG-HalOwl
mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigation
BillChan226/HALC
[ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding"
OpenKG-ORG/EasyDetect
An Easy-to-use Hallucination Detection Framework for LLMs.
hongbinye/Cognitive-Mirage-Hallucinations-in-LLMs
Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models"
intuit/sac3
Official repo for SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
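The self-consistency intuition behind approaches like SAC3 (re-sample answers and treat low agreement as a hallucination signal) can be illustrated with a minimal sketch. Exact string match here is only a stand-in for SAC3's semantic cross-checks; `consistency_score` and its arguments are illustrative names, not the repo's API.

```python
def consistency_score(target_answer, sampled_answers,
                      same=lambda a, b: a.strip().lower() == b.strip().lower()):
    """Fraction of re-sampled answers that agree with the target answer.

    `same` stands in for a real semantic-equivalence check (e.g. an NLI
    model or an LLM judge); normalized exact match is a toy placeholder.
    A low score suggests the target answer is not stable under
    re-sampling, which is treated as a hallucination signal."""
    if not sampled_answers:
        return 0.0
    agree = sum(same(target_answer, s) for s in sampled_answers)
    return agree / len(sampled_answers)

# 2 of 4 re-samples agree with "Paris", so the score is 0.5.
score = consistency_score("Paris", ["paris", "Lyon", "Paris", "Marseille"])
print(score)
```

SAC3's contribution is in what is perturbed and compared (semantically rephrased questions and cross-model checks); this sketch captures only the underlying agreement measurement.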
ChanLiang/CONNER
The implementation of the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
intuit-ai-research/DCR-consistency
DCR-Consistency: Divide-Conquer-Reasoning for Consistency Evaluation and Improvement of Large Language Models
KaijuML/dtt-multi-branch
Code for Controlling Hallucinations at Word Level in Data-to-Text Generation (C. Rebuffel, M. Roberti, L. Soulier, G. Scoutheeten, R. Cancelliere, P. Gallinari)
141forever/DiaHalu
This is the repository for the paper "DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models" (EMNLP 2024 Findings)
KaijuML/PARENTing-rl
Code for PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation (Rebuffel, Soulier, Scoutheeten, Gallinari; INLG 2020)
nasib-ullah/THVC
A PyTorch implementation of the paper "Thinking Hallucination for Video Captioning".
IAAR-Shanghai/UHGEval-dataset
The full pipeline for creating the UHGEval hallucination dataset
bsenst/streamlit-llm
The purpose of this application is to test LLM-generated interpretations of medical observations. The explanations are produced fully automatically by a large language model. This application is for experimental purposes only: it does not support real-world cases and does not replace advice from medical professionals.
comp-imaging-sci/hallucinations-tomo-recon
Codes related to the paper "On hallucinations in tomographic imaging"
vyraun/hallucinations
Code for "The Curious Case of Hallucinations in Neural Machine Translation".
ModelDBRepository/229278
Hierarchical Gaussian Filter (HGF) model of the conditioned hallucinations task (Powers et al., 2017)
aryand1/HALOMIN-Hallucination-Limitation-in-Knowledge-Graphs-via-Model-Integrity
This repo aims to remove or minimize hallucinations introduced by large language models during knowledge graph (KG) construction
SingularityLabs-ai/truthgpt-for-google-extension-mini
[TruthGPT](https://github.com/SingularityLabs-ai/TruthGPT-mini) for Google
VerseMetaVerse/GPT
Hallucinate - GPT - LLM - AI Chat - OpenAI - Sam Altman info
Pavansomisetty21/Langchain-Tutorial
LangChain tutorial using Gemini
rafaelsandroni/antibodies
Antibodies for LLM hallucinations (grouping LLM-as-a-judge, NLI, and reward models)