hxhcreate's Stars
YangLing0818/buffer-of-thought-llm
[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Alpha-VLLM/Lumina-T2X
Lumina-T2X is a unified framework for Text to Any Modality Generation
yihedeng9/STIC
Enhancing Large Vision Language Models with Self-Training on Image Comprehension.
takomc/amp
[NeurIPS 2024] The official code of the paper "Automated Multi-level Preference for MLLMs"
EvolvingLMMs-Lab/lmms-eval
Accelerating the development of large multimodal models (LMMs) with a one-click evaluation module, lmms-eval.
AtsuMiyai/UPD
Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)
wkentaro/gdown
Google Drive Public File Downloader when Curl/Wget Fails
open-compass/VLMEvalKit
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks
OpenMOSS/Say-I-Dont-Know
[ICML'24] Can AI Assistants Know What They Don't Know?
vlf-silkie/VLFeedback
YiyangZhou/POVID
[arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning
ZHZisZZ/modpo
[ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
prometheus-eval/prometheus-eval
Evaluate your LLM's responses with Prometheus and GPT-4 💯
HVision-NKU/StoryDiffusion
Accepted as a [NeurIPS 2024] Spotlight presentation
NVIDIA/NeMo-Aligner
Scalable toolkit for efficient model alignment
tianyi-lab/HallusionBench
[CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models
xieyuquanxx/awesome-Large-MultiModal-Hallucination
😎 A curated list of awesome LMM hallucination papers, methods & resources.
magic-research/PLLaVA
Official repository for the paper PLLaVA
Vchitect/VBench
[CVPR 2024 Highlight] VBench - We Evaluate Video Generation
ydyjya/Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs), giving researchers and practitioners insight into the safety implications, challenges, and advances surrounding these models.
mustvlad/ChatGPT-System-Prompts
A collection of the best system prompts for ChatGPT, a conversational AI model developed by OpenAI.
shikiw/OPERA
[CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
GanjinZero/RRHF
[NeurIPS 2023] RRHF & Wombat
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
prompt-security/ps-fuzz
Make your GenAI apps safe & secure 🚀 Test & harden your system prompt
geekan/HowToLiveLonger
程序员延寿指南 | A programmer's guide to living longer
NVIDIA/garak
The LLM vulnerability scanner
OSU-NLP-Group/AmpleGCG
AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs
thunlp/Muffin