hallucination
There are 49 repositories under the hallucination topic.
Libr-AI/OpenFactVerification
Loki: an open-source solution that automates the process of verifying factuality.
jxzhangjhu/Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of papers on uncertainty, reliability, and robustness in large language models
BradyFU/Woodpecker
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
amazon-science/RefChecker
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by large language models.
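Fine-grained checking of this kind generally follows an extract-then-verify pattern: split a response into atomic claims, then label each claim against reference text. A minimal, hypothetical Python sketch of that pattern (not RefChecker's actual API; the placeholder heuristics stand in for the LLM-based extractors and checkers real pipelines use):

```python
# Generic claim-level hallucination check: extract atomic claims from a
# response, then label each one against a reference. A sketch of the
# general pattern only -- not RefChecker's actual API.
from dataclasses import dataclass

@dataclass
class ClaimResult:
    claim: str
    label: str  # "entailed", "contradicted", or "neutral"

def extract_claims(response: str) -> list[str]:
    # Placeholder: real pipelines use an LLM or a parser to split the
    # response into atomic factual claims.
    return [s.strip() for s in response.split(".") if s.strip()]

def check_claim(claim: str, reference: str) -> str:
    # Placeholder: real checkers use an NLI model or an LLM judge.
    return "entailed" if claim.lower() in reference.lower() else "neutral"

def check_response(response: str, reference: str) -> list[ClaimResult]:
    return [ClaimResult(c, check_claim(c, reference))
            for c in extract_claims(response)]
```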
FuxiaoLiu/LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
tianyi-lab/HallusionBench
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
IAAR-Shanghai/UHGEval
[ACL 2024] A user-friendly evaluation framework: Eval Suite & benchmarks (UHGEval, HaluEval, HalluQA, etc.)
IAAR-Shanghai/ICSFSurvey
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
xieyuquanxx/awesome-Large-MultiModal-Hallucination
😎 A curated list of awesome LMM hallucination papers, methods & resources.
ictnlp/TruthX
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
zjunlp/FactCHD
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
zjunlp/KnowledgeCircuits
[NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
yfzhang114/LLaVA-Align
This is the official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
deshwalmahesh/PHUDGE
Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative mode, and more. It also contains a list of available tools, methods, repos, and code for hallucination detection, LLM evaluation, grading, and much more.
anlp-team/LTI_Neural_Navigator
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
dmis-lab/OLAPH
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
sled-group/3D-GRAND
Official Implementation of 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs
germain-hug/NeurHal
Visual Correspondence Hallucination: Towards Geometric Reasoning (Under Review)
NishilBalar/Awesome-LVLM-Hallucination
An up-to-date curated list of state-of-the-art research on hallucination in large vision-language models: papers, methods & resources
fanqiwan/KCA
[EMNLP 2024] Knowledge Verification to Nip Hallucination in the Bud
zjunlp/EasyDetect
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
ahnjaewoo/timechara
🧙🏻Code and benchmark for our Findings of ACL 2024 paper - "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models"
XinYuANU/FaceAttr
[CVPR 2018] Face super-resolution with supplementary attributes
zjunlp/Deco
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
zjunlp/NLPCC2024_RegulatingLLM
[NLPCC 2024] Shared Task 10: Regulating Large Language Models
hanmenghan/Skip-n
This repository contains the code for our paper 'Skip \n: A simple method to reduce hallucination in Large Vision-Language Models'.
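As the title suggests, the core trick is to discourage the model from emitting paragraph-break tokens during decoding, since the paper observes that hallucinated content often appears after a paragraph break. A hedged sketch of that idea using Hugging Face's LogitsProcessor interface (the repo's actual implementation may differ):

```python
# Sketch of a "skip newline" decoding constraint via the Hugging Face
# LogitsProcessor interface. Illustrates the idea of suppressing
# paragraph breaks; the paper's actual implementation may differ.
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class SkipNewlineProcessor(LogitsProcessor):
    def __init__(self, tokenizer):
        # Collect every vocabulary id whose decoded text contains "\n".
        self.banned = [i for i in range(len(tokenizer))
                       if "\n" in tokenizer.decode([i])]

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.banned] = float("-inf")  # never sample a newline
        return scores

# Usage: model.generate(**inputs,
#     logits_processor=LogitsProcessorList([SkipNewlineProcessor(tokenizer)]))
```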
yg211/explainable-metrics
An explainable sentence similarity measure
edshkim98/LocalDiffusion-Hallucination
Official code for 'Tackling Structural Hallucination in Image Translation with Local Diffusion' (ECCV'24 Oral)
llm-editing/HalluEditBench
Can Knowledge Editing Really Correct Hallucinations?
qqplot/dcpmi
[NAACL 2024] Official implementation of Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
weijiaheng/CHALE
Controlled HALlucination-Evaluation (CHALE) Question-Answering Dataset
thuanystuart/DD3412-chain-of-verification-reproduction
Re-implementation of the paper "Chain-of-Verification Reduces Hallucination in Large Language Models", developed as a final project for the Advanced Deep Learning course (DD3412) at KTH.
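Chain-of-Verification is simple enough to sketch end to end: draft an answer, plan fact-checking questions, answer them independently of the draft, then revise. A minimal Python outline, assuming a hypothetical `llm(prompt) -> str` completion function (not part of the repo above):

```python
# Minimal Chain-of-Verification (CoVe) sketch, following the four-step
# recipe of Dhuliawala et al. (2023). `llm` is a hypothetical completion
# function; swap in any LLM client.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def chain_of_verification(question: str) -> str:
    # 1. Draft a baseline answer.
    baseline = llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions that probe facts in the draft.
    plan = llm(
        "List short fact-checking questions, one per line, that would "
        f"verify this answer.\nQuestion: {question}\nAnswer: {baseline}"
    )
    checks = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so errors in the draft cannot leak into the checks.
    evidence = [(q, llm(f"Answer concisely: {q}")) for q in checks]

    # 4. Revise the draft in light of the verification answers.
    facts = "\n".join(f"Q: {q}\nA: {a}" for q, a in evidence)
    return llm(
        f"Question: {question}\nDraft answer: {baseline}\n"
        f"Verified facts:\n{facts}\n"
        "Rewrite the answer so it is consistent with the verified facts."
    )
```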
wisecubeai/pythia
Open-source AI hallucination monitoring