jingchenchen's Stars
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
mlfoundations/open_flamingo
An open-source framework for training large multimodal models.
deepseek-ai/DeepSeek-VL
DeepSeek-VL: Towards Real-World Vision-Language Understanding
HillZhang1999/llm-hallucination-survey
Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models"
Ruzim/NSFC-application-template-latex
Unofficial LaTeX template for the main body of a National Natural Science Foundation of China (NSFC) grant application (General Program).
allenai/visprog
Official code for VisProg (CVPR 2023 Best Paper!)
penghao-wu/vstar
PyTorch Implementation of "V* : Guided Visual Search as a Core Mechanism in Multimodal LLMs"
showlab/Awesome-MLLM-Hallucination
📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
aim-uofa/Matcher
[ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
junkunyuan/Awesome-Domain-Generalization
Awesome things about domain generalization, including papers, code, etc.
huangwb8/ChineseResearchLaTeX
A collection of LaTeX templates commonly used in scientific research.
OpenGVLab/PonderV2
PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm
aim-uofa/MovieDreamer
Timothyxxx/RetrivalLMPapers
A paper collection on retrieval-based (retrieval-augmented) language models.
yuhangzang/ContextDET
Contextual Object Detection with Multimodal Large Language Models
Adamdad/Awesome-ComposableAI
A curated list of Composable AI methods: Building AI system by composing modules.
PengtaoJiang/Segment-Anything-CLIP
Connecting Segment Anything's output masks with the CLIP model; see also Awesome-Segment-Anything-Works.
xmed-lab/CLIPN
ICCV 2023: CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No
BatsResearch/csp
Learning to compose soft prompts for compositional zero-shot learning.
haoosz/ade-czsl
[CVPR 2023] Learning Attention as Disentangler for Compositional Zero-shot Learning
arijitray1993/COLA
COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes!
YukunLi99/AdaptSAM
bighuang624/Troika
[CVPR 2024] Troika: Multi-Path Cross-Modal Traction for Compositional Zero-Shot Learning
Forest-art/DFSP
aim-uofa/VLModel
Repo of HawkLlama.
jamessealesmith/ConStruct-VL
PyTorch code for the CVPR'23 paper: "ConStruct-VL: Data-Free Continual Structured VL Concepts Learning"
wahr0411/PTADisc
NeverMoreLCH/SSL2CG
Implementation for the paper "Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language" (CVPR 2023)
YanyuanQiao/VLN-PETL
Code of the ICCV 2023 paper "VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation"
NeverMoreLCH/CG-SPS
Implementation and Dataset for the paper "Compositional Substitutivity of Visual Reasoning for Visual Question Answering" (ECCV 2024)