multi-modal
There are 405 repositories under the multi-modal topic.
OpenBMB/MiniCPM-V
MiniCPM-V 4.5: A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone
agentscope-ai/agentscope
AgentScope: Agent-Oriented Programming for Building LLM Applications
OpenGVLab/InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance.
activeloopai/deeplake
Database for AI. Store Vectors, Images, Texts, Videos, etc. Use with LLMs/LangChain. Store, query, version, & visualize any AI data. Stream data in real-time to PyTorch/TensorFlow. https://activeloop.ai
modelscope/modelscope
ModelScope: bring the notion of Model-as-a-Service to life.
TEN-framework/ten-framework
Open-source framework for conversational voice AI agents.
zai-org/CogVLM
A state-of-the-art open visual language model and multimodal pretrained model.
lucidrains/DALLE-pytorch
Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
OFA-Sys/Chinese-CLIP
Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
modelscope/data-juicer
Data processing for and with foundation models! 🍎 🍋 🌽 ➡️ ➡️🍸 🍹 🍷
valhalla/valhalla
Open Source Routing Engine for OpenStreetMap
marqo-ai/marqo
Unified embedding generation and search engine. Also available as a cloud service at cloud.marqo.ai
VectorSpaceLab/OmniGen
OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340
zai-org/VisualGLM-6B
A multimodal Chinese-English bilingual conversational language model.
zjunlp/DeepKE
[EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction
SciSharp/LLamaSharp
A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.
PKU-YuanGroup/Video-LLaVA
【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
docarray/docarray
Represent, send, store and search multimodal data
open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks.
zai-org/CogVLM2
GPT4V-level open-source multi-modal model based on Llama3-8B
dvlab-research/LISA
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
PKU-YuanGroup/MoE-LLaVA
【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models
tangxyw/RecSysPapers
A collection of classic and cutting-edge industry papers in the recommendation, advertising, and search domains.
Kav-K/GPTDiscord
A robust, all-in-one GPT interface for Discord. ChatGPT-style conversations, image generation, AI moderation, custom indexes/knowledge base, YouTube summarizer, and more!
OpenMotionLab/MotionGPT
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
IntelLabs/fastRAG
Efficient Retrieval Augmentation and Generation Framework
DirtyHarryLYL/Transformer-in-Vision
Recent Transformer-based CV and related works.
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
vercel/modelfusion
The TypeScript library for building AI applications.
MedMNIST/MedMNIST
[pip install medmnist] 18x Standardized Datasets for 2D and 3D Biomedical Image Classification
lucidrains/transfusion-pytorch
PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI
Tebmer/Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
PKU-YuanGroup/LanguageBind
【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
AnswerDotAI/byaldi
Use late-interaction multi-modal models such as ColPali in just a few lines of code.
microsoft/farmvibes-ai
FarmVibes.AI: Multi-Modal GeoSpatial ML Models for Agriculture and Sustainability
OpenBMB/VisRAG
Parsing-free RAG supported by VLMs