Pinned Repositories
LRV-Instruction
[ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
bert4nlp
BJTU-23ML-Homework
debias-baseline
Echoes
Code for the paper "Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber" (ACM Multimedia 2023)
isruihu.github.io
spiders
OPERA
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
LURE
[ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models
HalluciDoctor
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024)
isruihu's Repositories
isruihu/Echoes
Code for the paper "Echoes: Unsupervised Debiasing via Pseudo-bias Labeling in an Echo Chamber" (ACM Multimedia 2023)
isruihu/BJTU-23ML-Homework
isruihu/debias-baseline
isruihu/bert4nlp
isruihu/isruihu.github.io
isruihu/spiders