hallucination

There are 49 repositories under the hallucination topic.

  • Libr-AI/OpenFactVerification

    Loki: an open-source solution designed to automate factuality verification

    Language: Python
  • jxzhangjhu/Awesome-LLM-Uncertainty-Reliability-Robustness

    Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models

  • BradyFU/Woodpecker

    ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.

    Language: Python
  • amazon-science/RefChecker

    RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by Large Language Models (a claim-checking sketch follows this list).

    Language: Python
  • FuxiaoLiu/LRV-Instruction

    [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning

    Language: Python
  • tianyi-lab/HallusionBench

    [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models

    Language: Python
  • IAAR-Shanghai/UHGEval

    [ACL 2024] User-friendly evaluation framework: Eval Suite & benchmarks including UHGEval, HaluEval, and HalluQA.

    Language: Python
  • IAAR-Shanghai/ICSFSurvey

    Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.

    Language: Jupyter Notebook
  • xieyuquanxx/awesome-Large-MultiModal-Hallucination

    😎 A curated list of awesome LMM hallucination papers, methods & resources.

  • ictnlp/TruthX

    Code for ACL 2024 paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space"

    Language: Python
  • zjunlp/FactCHD

    [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection

    Language: Python
  • zjunlp/KnowledgeCircuits

    [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers

    Language: Python
  • yfzhang114/LLaVA-Align

    This is the official repo for Debiasing Large Visual Language Models, including a post-hoc debiasing method and a Visual Debias Decoding strategy.

    Language: Python
  • AmourWaltz/Reliable-LLM

    Language: JavaScript
  • HillZhang1999/ICD

    Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" (a decoding sketch follows this list).

    Language: Python
  • deshwalmahesh/PHUDGE

    Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric or reference answer, in absolute or relative mode, and more. It also collects available tools, methods, repos, and code for hallucination detection, LLM evaluation, and grading.

    Language: Jupyter Notebook
  • anlp-team/LTI_Neural_Navigator

    "Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases" by Jiarui Li and Ye Yuan and Zehua Zhang

    Language: HTML
  • dmis-lab/OLAPH

    OLAPH: Improving Factuality in Biomedical Long-form Question Answering

    Language: Python
  • 345ishaan/DenseLidarNet

    Language: Jupyter Notebook
  • sled-group/3D-GRAND

    Official Implementation of 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs

  • germain-hug/NeurHal

    Visual Correspondence Hallucination: Towards Geometric Reasoning (Under Review)

  • NishilBalar/Awesome-LVLM-Hallucination

    An up-to-date curated list of state-of-the-art research on hallucination in large vision-language models: papers, methods & resources.

  • fanqiwan/KCA

    [EMNLP 2024] Knowledge Verification to Nip Hallucination in the Bud

    Language: Python
  • zjunlp/EasyDetect

    [ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.

    Language: Python
  • ahnjaewoo/timechara

    🧙🏻Code and benchmark for our Findings of ACL 2024 paper - "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models"

    Language: Python
  • XinYuANU/FaceAttr

    [CVPR 2018] Face super-resolution with supplementary attributes

    Language: Lua
  • zjunlp/Deco

    MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation

    Language: Python
  • zjunlp/NLPCC2024_RegulatingLLM

    [NLPCC 2024] Shared Task 10: Regulating Large Language Models

  • hanmenghan/Skip-n

    This repository contains the code of our paper 'Skip \n: A simple method to reduce hallucination in Large Vision-Language Models' (a logits-processor sketch follows this list).

    Language: Python
  • yg211/explainable-metrics

    An explainable sentence similarity measurement

    Language: Jupyter Notebook
  • edshkim98/LocalDiffusion-Hallucination

    Official code for 'Tackling Structural Hallucination in Image Translation with Local Diffusion' (ECCV'24 Oral)

    Language: Python
  • llm-editing/HalluEditBench

    Can Knowledge Editing Really Correct Hallucinations?

    Language: Python
  • qqplot/dcpmi

    [NAACL 2024] Official implementation of "Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information" (a scoring sketch follows this list).

    Language: Python
  • weijiaheng/CHALE

    Controlled HALlucination-Evaluation (CHALE) Question-Answering Dataset

    Language: Python
  • thuanystuart/DD3412-chain-of-verification-reproduction

    Re-implementation of the paper "Chain-of-Verification Reduces Hallucination in Large Language Models" for hallucination reduction, developed as a final project for the Advanced Deep Learning course (DD3412) at KTH (a pipeline sketch follows this list).

    Language: Python
  • wisecubeai/pythia

    Open-source AI hallucination monitoring.

    Language: Python
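
A few of the methods cataloged above are simple enough to sketch in code. First, amazon-science/RefChecker checks responses at the level of fine-grained claims. A minimal sketch of that extract-then-check pattern, with `llm` as a generic prompt-to-string function (hypothetical; RefChecker's actual extractor and checker interfaces differ):

```python
# Sketch of claim-level hallucination checking: split a response into atomic
# claims, then label each claim against the reference text.

def extract_claims(response: str, llm) -> list[str]:
    # Ask the model to break the response into atomic, checkable claims.
    out = llm("List each atomic factual claim in the text, one per line:\n" + response)
    return [line.strip("-• ").strip() for line in out.splitlines() if line.strip()]

def check_claim(claim: str, reference: str, llm) -> str:
    # Classify one claim against the reference.
    return llm(
        f"Reference:\n{reference}\n\nClaim: {claim}\n"
        "Answer with exactly one word: Entailment, Neutral, or Contradiction."
    ).strip()

def detect_hallucinations(response: str, reference: str, llm) -> dict[str, str]:
    # Claims labeled Contradiction (or Neutral) are hallucination candidates.
    return {c: check_claim(c, reference, llm) for c in extract_claims(response, llm)}
```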
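HillZhang1999/ICD contrasts a truthful model against a deliberately hallucination-prone one at decoding time. A minimal sketch of one induce-then-contrast decoding step, assuming two Hugging Face-style causal LMs; the weight `alpha` and greedy selection are illustrative choices, not the paper's exact configuration:

```python
import torch

@torch.no_grad()
def contrastive_next_token(base_model, induced_model, input_ids, alpha=1.0):
    """One decoding step: amplify the base model's logits and subtract the
    hallucination-induced model's logits (induce-then-contrast sketch)."""
    base_logits = base_model(input_ids).logits[:, -1, :]
    induced_logits = induced_model(input_ids).logits[:, -1, :]
    contrasted = (1 + alpha) * base_logits - alpha * induced_logits
    return contrasted.argmax(dim=-1)  # greedy pick; sampling also works
```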
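anlp-team/LTI_Neural_Navigator is a RAG case study: grounding answers in passages retrieved from a private knowledge base leaves the model less room to hallucinate. A minimal retrieve-then-generate sketch, with `embed` and `llm` as hypothetical stand-ins for a real encoder and chat model:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def retrieve(query: str, corpus: list[str], embed, k: int = 3) -> list[str]:
    # Rank passages by embedding similarity to the query.
    q = embed(query)
    return sorted(corpus, key=lambda p: -cosine(q, embed(p)))[:k]

def rag_answer(query: str, corpus: list[str], embed, llm) -> str:
    # Constrain the model to the retrieved context so unsupported answers
    # become "unknown" instead of hallucinations.
    context = "\n\n".join(retrieve(query, corpus, embed))
    return llm("Answer using only the context below; reply 'unknown' if it "
               f"is not covered.\n\nContext:\n{context}\n\nQuestion: {query}")
```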
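hanmenghan/Skip-n builds on the observation that LVLM hallucinations often start right after a paragraph break. A minimal sketch of that idea as a Hugging Face `LogitsProcessor` that down-weights newline tokens; the token ids and penalty value are assumptions, not the paper's exact recipe:

```python
import torch
from transformers import LogitsProcessor

class SkipNewlineProcessor(LogitsProcessor):
    """Down-weight paragraph-break tokens during generation (sketch of the
    'Skip \\n' idea: hallucinations often follow '\\n\\n' in LVLM outputs)."""

    def __init__(self, newline_token_ids, penalty=10.0):
        self.newline_token_ids = list(newline_token_ids)
        self.penalty = penalty  # assumed value; tune per model

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        scores[:, self.newline_token_ids] -= self.penalty
        return scores

# Usage (illustrative): pass the processor via `logits_processor` to
# `model.generate`, with newline_token_ids taken from the tokenizer,
# e.g. tokenizer("\n\n").input_ids.
```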
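qqplot/dcpmi rescores generated tokens with domain-conditional pointwise mutual information: a domain-prompt-only language-model score is subtracted from the source-conditioned score, so tokens that are merely generic for the domain stop dominating. A sketch of that token-level score, with the weight `lam` an assumed hyperparameter:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dc_pmi_scores(model, src_ids, domain_ids, prefix_ids, lam=0.5):
    """Next-token scores of the form
    log p(y_t | source, y_<t) - lam * log p(y_t | domain prompt, y_<t),
    a sketch of domain-conditional PMI rescoring (`lam` is an assumed weight)."""
    with_src = torch.cat([src_ids, prefix_ids], dim=-1)    # source-conditioned
    with_dom = torch.cat([domain_ids, prefix_ids], dim=-1)  # domain-only
    logp_src = F.log_softmax(model(with_src).logits[:, -1, :], dim=-1)
    logp_dom = F.log_softmax(model(with_dom).logits[:, -1, :], dim=-1)
    return logp_src - lam * logp_dom  # rank candidate tokens by this score
```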
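Finally, thuanystuart/DD3412-chain-of-verification-reproduction re-implements Chain-of-Verification, which drafts an answer, plans verification questions, answers them independently of the draft, and then revises. A minimal sketch of those four stages, again with `llm` as a generic prompt-to-string function rather than the repo's actual interface:

```python
def chain_of_verification(question: str, llm) -> str:
    # 1. Draft a baseline answer.
    draft = llm(f"Answer the question:\n{question}")
    # 2. Plan verification questions targeting facts in the draft.
    plan = llm(f"Draft answer:\n{draft}\nList short questions that verify each fact.")
    checks = [q.strip() for q in plan.splitlines() if q.strip()]
    # 3. Answer each verification question independently (no draft in context),
    #    so the model cannot simply repeat its own hallucination.
    evidence = [(q, llm(q)) for q in checks]
    # 4. Revise the draft in light of the verification answers.
    notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in evidence)
    return llm(f"Question: {question}\nDraft: {draft}\nVerification:\n{notes}\n"
               "Write a corrected final answer.")
```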