hallucination
There are 40 repositories under the hallucination topic.
Libr-AI/OpenFactVerification
Loki: Open-source solution designed to automate the process of verifying factuality
BradyFU/Woodpecker
✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
jxzhangjhu/Awesome-LLM-Uncertainty-Reliability-Robustness
Awesome-LLM-Robustness: a curated list of work on uncertainty, reliability, and robustness in Large Language Models
FuxiaoLiu/LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
amazon-science/RefChecker
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models.
tianyi-lab/HallusionBench
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
IAAR-Shanghai/UHGEval
[ACL 2024] Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
xieyuquanxx/awesome-Large-MultiModal-Hallucination
😎 An up-to-date, curated list of awesome LMM hallucination papers, methods & resources.
ictnlp/TruthX
Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"
zjunlp/FactCHD
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
yfzhang114/LLaVA-Align
The official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
deshwalmahesh/PHUDGE
Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative mode, and more. It also collects available tools, methods, repos, and code for hallucination detection, LLM evaluation, and grading.
anlp-team/LTI_Neural_Navigator
"Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang
germain-hug/NeurHal
Visual Correspondence Hallucination: Towards Geometric Reasoning (Under Review)
zjunlp/KnowledgeCircuits
Knowledge Circuits in Pretrained Transformers
fanqiwan/KCA
Knowledge Verification to Nip Hallucination in the Bud
sled-group/3D-GRAND
Official Implementation of 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs
XinYuANU/FaceAttr
CVPR 2018: Face super-resolution with supplementary attributes
ahnjaewoo/timechara
🧙🏻Code and benchmark for our Findings of ACL 2024 paper - "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models"
zjunlp/NLPCC2024_RegulatingLLM
[NLPCC 2024] Shared Task 10: Regulating Large Language Models
yg211/explainable-metrics
An explainable sentence similarity measurement
hanmenghan/Skip-n
This repository contains the code for our paper 'Skip \n: A simple method to reduce hallucination in Large Vision-Language Models'.
zjunlp/EasyDetect
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
NishilBalar/Awesome-LVLM-Hallucination
An up-to-date, curated list of awesome state-of-the-art LVLM hallucination research, papers & resources
thuanystuart/DD3412-chain-of-verification-reproduction
Re-implementation of the paper "Chain-of-Verification Reduces Hallucination in Large Language Models" for hallucination reduction. Developed as a final project for the Advanced Deep Learning course (DD3412) at KTH.
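Chain-of-Verification (CoVe), the method being reproduced above, is a prompting loop: draft an answer, plan verification questions about it, answer those questions independently, then revise the draft. The sketch below illustrates that loop under simplifying assumptions; `call_llm` is a placeholder for any chat model, and this is not the KTH re-implementation itself.

```python
# Sketch of the Chain-of-Verification (CoVe) loop: baseline answer ->
# verification questions -> independent answers -> revised final answer.
# `call_llm` is a placeholder for a real LLM call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a dummy string here."""
    return f"[response to: {prompt[:40]}...]"

def chain_of_verification(question: str, n_checks: int = 3) -> str:
    # 1. Draft a baseline answer.
    baseline = call_llm(f"Answer concisely: {question}")

    # 2. Plan verification questions that probe the facts stated in the baseline.
    plan = call_llm(
        f"List {n_checks} short questions that would verify the facts in:\n{baseline}"
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()][:n_checks]

    # 3. Answer each verification question independently (without the baseline in
    #    context), so the checks are not biased toward repeating the original error.
    verifications = [(q, call_llm(f"Answer factually: {q}")) for q in verification_questions]

    # 4. Revise the baseline in light of the verification answers.
    evidence = "\n".join(f"Q: {q}\nA: {a}" for q, a in verifications)
    return call_llm(
        f"Original question: {question}\nDraft answer: {baseline}\n"
        f"Verification results:\n{evidence}\n"
        "Write a corrected final answer, dropping any claim the checks did not support."
    )

if __name__ == "__main__":
    print(chain_of_verification("Name three politicians born in New York City."))
```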
weijiaheng/CHALE
Controlled HALlucination-Evaluation (CHALE) Question-Answering Dataset
18907305772/KCA
Knowledge Verification to Nip Hallucination in the Bud
qqplot/dcpmi
[NAACL24] Official Implementation of Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
robertbenson/docker_openai_custom_weather_demo
OpenAI function-calling demo that retrieves customizable weather information
wangtz19/DecodingStrategy
Unofficial implementations of optimized decoding strategies for large language models
CrackedResearcher/LLMVerify
Verify outputs generated by LLMs, backed by real-time data
DIDSR/mpi_sfrc
sFRC: identifying fakes in medical images reconstructed using AI
robertbenson/openai_assistant_code_interpreter
OpenAI assistant using the code interpreter tool
vr25/lrec-coling-hallucination-tutorial
Materials for the LREC-COLING 2024 tutorial on hallucination