multimodal-llm
There are 12 repositories under the multimodal-llm topic.
eric-ai-lab/MiniGPT-5
Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens"
alipay/Ant-Multi-Modal-Framework
Research code from the Multimodal-Cognition team at Ant Group
Zhoues/MineDreamer
This repo is the official implementation of "MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control"
UCSC-VLAA/vllm-safety-benchmark
[ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs"
shanface33/GPT4MF_UB
Official repository of the paper "Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics"
HenryPengZou/ImplicitAVE
[ACL 2024 Findings] Dataset and Code of "ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction"
zhudotexe/kani-vision
Kani extension for supporting vision-language models (VLMs). Comes with model-agnostic support for GPT-Vision and LLaVA.
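For orientation, a minimal sketch of what a kani-vision chat loop might look like. The VisionKani and OpenAIVisionEngine names and import paths are assumptions modeled on kani's extension conventions, not verified against this package; check the repo for the actual API.

```python
# Minimal sketch only: the import paths and class names below are
# ASSUMPTIONS modeled on kani's extension conventions, not verified
# against kani-vision. Consult the repo for the real API.
from kani import chat_in_terminal           # core kani helper (real API)
from kani.ext.vision import VisionKani      # ASSUMED class name
from kani.ext.vision.engines.openai import OpenAIVisionEngine  # ASSUMED

engine = OpenAIVisionEngine(api_key="sk-...", model="gpt-4-vision-preview")
ai = VisionKani(engine)
chat_in_terminal(ai)  # interactive terminal chat; attach images per the repo's docs
```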
iamaziz/chat_with_images
Streamlit app to chat with images using multimodal LLMs.
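Not the repo's actual code (it may use a different multimodal backend), but a generic sketch of the pattern such an app follows, here with Streamlit plus the OpenAI vision-message format:

```python
# Generic image-chat sketch, NOT taken from iamaziz/chat_with_images;
# the backend (OpenAI gpt-4o) and message format are illustrative choices.
import base64
import streamlit as st
from openai import OpenAI

st.title("Chat with images")
uploaded = st.file_uploader("Upload an image", type=["png", "jpg", "jpeg"])
prompt = st.chat_input("Ask something about the image")

if uploaded and prompt:
    st.image(uploaded)
    b64 = base64.b64encode(uploaded.read()).decode()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    st.chat_message("assistant").write(resp.choices[0].message.content)
```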
autodistill/autodistill-llava
LLaVA base model for use with Autodistill.
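Autodistill base models are typically wrapped around a prompt-to-class ontology and then used to auto-label a folder of images; below is a sketch assuming autodistill-llava follows that convention (the LLaVA class name and label() arguments are assumptions; consult the repo):

```python
# Standard Autodistill auto-labeling flow; the LLaVA class name and
# label() signature are ASSUMPTIONS based on other Autodistill base
# models, not verified against autodistill-llava.
from autodistill.detection import CaptionOntology
from autodistill_llava import LLaVA  # ASSUMED import/class name

# Map free-text prompts to the class names you want in the output dataset
base_model = LLaVA(ontology=CaptionOntology({"a photo of a dog": "dog"}))

# Auto-label every .jpg in ./images into an annotated dataset
base_model.label(input_folder="./images", extension=".jpg")
```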
aastroza/cachai
The future of AI speaks Chilean Spanish, cachai? ("cachai" is Chilean slang for "you get it?")
abdur75648/MedicalGPT
Medical Report Generation and VQA (Adapting XrayGPT to Any Modality)
ChocoWu/SeTok-web
This is the project webpage for 'SeTok'.