mllm

There are 101 repositories under the mllm topic.

  • microsoft/unilm

    Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python
  • X-PLUG/MobileAgent

    Mobile-Agent: The Powerful Mobile Device Operation Assistant Family

Language: Python
  • InternLM/InternLM-XComposer

    InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions

Language: Python
  • magic-quill/MagicQuill

    Official Implementations for Paper - MagicQuill: An Intelligent Interactive Image Editing System

Language: Python
  • atfortes/Awesome-LLM-Reasoning

    Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought and OpenAI o1 🍓

  • X-PLUG/mPLUG-DocOwl

    mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding

Language: Python
  • cambrian-mllm/cambrian

    Cambrian-1 is a family of multimodal LLMs with a vision-centric design.

Language: Python
  • BAAI-DCAI/Bunny

    A family of lightweight multimodal models.

Language: Python
  • CircleRadon/Osprey

    [CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"

Language: Python
  • simular-ai/Agent-S

    Agent S: an open agentic framework that uses computers like a human

Language: Python
  • BradyFU/Woodpecker

    ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.

Language: Python
  • FoundationVision/Groma

    [ECCV2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization

Language: Python
  • NVlabs/EAGLE

    EAGLE: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders

Language: Python
  • dvlab-research/LLMGA

    This project is the official implementation of 'LLMGA: Multimodal Large Language Model based Generation Assistant', ECCV2024 Oral

Language: Python
  • SkyworkAI/Vitron

    NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing

Language: Python
  • gokayfem/ComfyUI_VLM_nodes

    Custom ComfyUI nodes for Vision Language Models, Large Language Models, Image to Music, Text to Music, Consistent and Random Creative Prompt Generation

Language: Python
  • YingqingHe/Awesome-LLMs-meet-Multimodal-Generation

    🔥🔥🔥 A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio).

Language: HTML
  • Coobiw/MPP-LLaVA

Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on an RTX 3090/4090 with 24 GB.

Language: Jupyter Notebook
  • X-PLUG/Youku-mPLUG

    Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks

Language: Python
  • Atomic-man007/Awesome_Multimodel_LLM

Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models (MLLM). It covers datasets, tuning techniques, in-context learning, visual reasoning, foundational models, and more. Stay updated with the latest advancements.

  • baaivision/EVE

    [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models

Language: Python
  • CircleRadon/TokenPacker

    The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM".

Language: Python
  • X-PLUG/mPLUG-2

    mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023)

Language: Python
  • TIGER-AI-Lab/Mantis

    Official code for Paper "Mantis: Multi-Image Instruction Tuning" (TMLR2024)

Language: Python
  • Yui010206/SeViLA

    [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering

Language: Python
  • bz-lab/AUITestAgent

    AUITestAgent is the first automatic, natural language-driven GUI testing tool for mobile apps, capable of fully automating the entire process of GUI interaction and function verification.

  • ZebangCheng/Emotion-LLaMA

    Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning

Language: Python
  • FoundationVision/GenerateU

    [CVPR2024] Generative Region-Language Pretraining for Open-Ended Object Detection

Language: Python
  • sterzhang/image-textualization

    Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024)

Language: Python
  • TideDra/VL-RLHF

    A RLHF Infrastructure for Vision-Language Models

Language: Python
  • DAMO-NLP-SG/VideoRefer

    The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM"

Language: Python
  • baaivision/DenseFusion

    DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception

Language: Python
  • thu-ml/MMTrustEval

    A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks)

Language: Python
  • IDEA-Research/ChatRex

    Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding

Language: Python
  • lerogo/MMGenBench

    Official repository of MMGenBench

Language: Python
  • zjrwtx/SFT-data-builder

Use free LLM APIs together with your own private-domain data to generate SFT training data (completely free of charge); supports the training-data formats of tools such as LLaMA-Factory. Synthetic data.

Language: JavaScript