cross-modal-retrieval

There are 72 repositories under the cross-modal-retrieval topic.

  • jina-ai/clip-as-service

    🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP

    Language: Python
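Most repositories under this topic share the same retrieval core: encode each modality into a joint embedding space, then rank candidates by cosine similarity. A minimal NumPy sketch of that ranking step (the embeddings here are random placeholders standing in for real CLIP outputs, not values from any listed repo):

```python
import numpy as np

def rank_by_cosine(query_emb, gallery_embs):
    """Return gallery indices sorted by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity per gallery item
    return np.argsort(-sims), sims     # best match first

# Toy example: a "text" query ranked against three "image" embeddings.
rng = np.random.default_rng(0)
text_query = rng.normal(size=64)
image_gallery = rng.normal(size=(3, 64))
order, sims = rank_by_cosine(text_query, image_gallery)
print(order)
```

In a real pipeline the gallery embeddings are precomputed once and the query is encoded on the fly, which is what makes this formulation scale to large collections.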
  • YehLi/xmodaler

    X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).

    Language: Python
  • Paranioar/Awesome_Matching_Pretraining_Transfering

    A paper list covering large multi-modality models, parameter-efficient finetuning, vision-language pretraining, and conventional image-text matching, intended as a preliminary overview.

  • layumi/Image-Text-Embedding

    TOMM2020 Dual-Path Convolutional Image-Text Embedding :feet: https://arxiv.org/abs/1711.05535

    Language: MATLAB
  • zjukg/KG-MM-Survey

    Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey

  • Paranioar/SGRAF

    [AAAI 2021] Code for "Similarity Reasoning and Filtration for Image-Text Matching"

    Language: Python
  • woodfrog/vse_infty

    Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021

    Language: Python
  • penghu-cs/DSCMR

    Deep Supervised Cross-modal Retrieval (CVPR 2019, PyTorch Code)

    Language: Python
  • yalesong/pvse

    Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval (CVPR 2019)

    Language: Python
  • slavabarkov/tidy

    Offline semantic text-to-image and image-to-image search on Android, powered by a quantized state-of-the-art pretrained CLIP vision-language model and the ONNX Runtime inference engine

    Language: Kotlin
  • naver-ai/pcme

    Official PyTorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021)

    Language: Python
  • howard-hou/BagFormer

    PyTorch code for BagFormer: Better Cross-Modal Retrieval via bag-wise interaction

    Language: Python
  • jpthu17/EMCL

    [NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations

    Language: Python
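Several of the video-text entries here (EMCL, and below it HBI and DiCoSA) train with a contrastive objective over batches of matched pairs. A minimal NumPy sketch of the standard symmetric InfoNCE loss these methods build on — this is the plain CLIP-style formulation, not the expectation-maximization variant EMCL itself proposes:

```python
import numpy as np

def symmetric_infonce(img_embs, txt_embs, temperature=0.07):
    """CLIP-style contrastive loss: matched pairs sit on the diagonal."""
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    logits = img @ txt.T / temperature            # (N, N) similarity matrix

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))       # targets are the diagonal

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 32))
loss_random = symmetric_infonce(a, rng.normal(size=(4, 32)))
loss_aligned = symmetric_infonce(a, a)            # perfectly matched pairs
print(loss_aligned, loss_random)
```

Aligned pairs should give a much lower loss than random pairings, which is exactly the pressure that pulls the two modalities into a shared space.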
  • jpthu17/DiffusionRet

    [ICCV 2023] DiffusionRet: Generative Text-Video Retrieval with Diffusion Model

    Language: Python
  • ilaria-manco/muscall

    Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022)

    Language: Python
  • jpthu17/HBI

    [CVPR 2023 Highlight] Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning

    Language: Python
  • AyanKumarBhunia/on-the-fly-FGSBIR

    [CVPR 2020, Oral] "Sketch Less for More: On-the-Fly Fine-Grained Sketch-Based Image Retrieval", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

    Language: Python
  • naver-ai/eccv-caption

    Extended COCO Validation (ECCV) Caption dataset (ECCV 2022)

    Language: Python
  • penghu-cs/MRL

    Learning Cross-Modal Retrieval with Noisy Labels (CVPR 2021, PyTorch Code)

    Language: Python
  • jpthu17/DiCoSA

    [IJCAI 2023] Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment

    Language: Python
  • naver-ai/pcmepp

    Official PyTorch implementation of "Improved Probabilistic Image-Text Representations" (ICLR 2024)

    Language: Python
  • BrandonHanx/TextReID

    [BMVC 2021] Text-Based Person Search with Limited Data

    Language: Python
  • mako443/Text2Pos-CVPR2022

    Code, dataset and models for our CVPR 2022 publication "Text2Pos"

    Language: Python
  • penghu-cs/UCCH

    Unsupervised Contrastive Cross-modal Hashing (IEEE TPAMI 2023, PyTorch Code)

    Language: Python
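The hashing entries (UCCH above, AGAH below) take a different route to the same goal: both modalities are mapped into short binary codes, and retrieval ranks by Hamming distance, which is far cheaper than dense similarity at scale. A minimal sketch of that lookup — sign-binarized random embeddings stand in for the learned hash codes these papers produce:

```python
import numpy as np

def to_hash_codes(embs):
    """Binarize real-valued embeddings into {0, 1} codes by sign."""
    return (embs > 0).astype(np.uint8)

def hamming_rank(query_code, gallery_codes):
    """Rank gallery items by ascending Hamming distance to the query."""
    dists = np.count_nonzero(gallery_codes != query_code, axis=1)
    return np.argsort(dists), dists

rng = np.random.default_rng(0)
text_embs = rng.normal(size=(5, 64))
image_embs = text_embs + 0.1 * rng.normal(size=(5, 64))  # loosely aligned pairs
text_codes = to_hash_codes(text_embs)
image_codes = to_hash_codes(image_embs)

order, dists = hamming_rank(text_codes[2], image_codes)
print(order[0])  # the paired image should rank first
```

Because Hamming distance reduces to XOR plus popcount, large-scale systems can scan millions of codes per query; the research question in these repos is how to learn codes where cross-modal neighbors stay close after binarization.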
  • WendellGul/AGAH

    Source code for paper "Adversary Guided Asymmetric Hashing for Cross-Modal Retrieval".

    Language: Python
  • penghu-cs/SDML

    Scalable deep multimodal learning for cross-modal retrieval (SIGIR 2019, PyTorch Code)

    Language: Python
  • kyuyeonpooh/objects-that-sound

    An unofficial implementation of the paper "Objects that Sound" (ECCV 2018).

    Language: Python
  • LivXue/GNN4CMR

    PyTorch implementation of the AAAI-21 paper "Dual Adversarial Label-aware Graph Neural Networks for Cross-modal Retrieval" and the TPAMI-22 paper "Integrating Multi-Label Contrastive Learning with Dual Adversarial Graph Neural Networks for Cross-Modal Retrieval".

    Language: Python
  • penghu-cs/MAN

    Multimodal Adversarial Network for Cross-modal Retrieval (PyTorch Code)

    Language: Python
  • idealwhite/VLDeformer

    PyTorch implementation of the paper "VLDeformer: Vision Language Decomposed Transformer for Fast Cross-modal Retrieval", KBS 2022

    Language: Jupyter Notebook
  • Paranioar/RCAR

    [TIP 2023] Code for "Plug-and-Play Regulators for Image-Text Matching"

    Language: Python
  • ailab-kyunghee/CM2_DVC

    [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval

    Language: Python
  • ict-bigdatalab/VNEL

    Dataset and code for EMNLP 2022 "Visual Named Entity Linking: A New Dataset and A Baseline"

  • xiaoyuan1996/SemanticLocalizationMetrics

    The first study of semantic localization

    Language: Python
  • jaychempan/SWAN-pytorch

    Official Code for “Reducing Semantic Confusion: Scene-aware Aggregation Network for Remote Sensing Cross-modal Retrieval” (ICMR'23 Oral)

    Language: Python
  • penghu-cs/MvLDAN

    Multi-view Linear Discriminant Analysis Network for Cross-modal Retrieval and Cross-view Recognition (Keras & Theano code)

    Language: Python