🚀🚀🚀 This repository lists some awesome public projects about Zero-shot/Few-shot Learning based on CLIP (Contrastive Language-Image Pre-Training).
- Learning Transferable Visual Models From Natural Language Supervision [CODE]
- CLIP: Connecting Text and Images
- Multimodal Neurons in Artificial Neural Networks
- OpenCLIP: includes larger and independently trained CLIP models up to ViT-G/14
- Hugging Face implementation of CLIP: for easier integration with the HF ecosystem
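All of the methods below build on CLIP's zero-shot recipe: embed the image and one text prompt per class (e.g. "a photo of a {class}") into a shared space, then pick the class whose text embedding has the highest cosine similarity to the image embedding. A minimal sketch of that scoring step (random placeholder features stand in for real CLIP encoder outputs; the `logit_scale=100.0` temperature mirrors CLIP's learned logit scale):

```python
import numpy as np

def zero_shot_classify(image_feat, text_feats, logit_scale=100.0):
    """Score one image against per-class text embeddings, CLIP-style.

    image_feat: (d,) image embedding.
    text_feats: (num_classes, d) text embeddings of per-class prompts.
    Returns a softmax distribution over classes.
    """
    # L2-normalize so the dot product equals cosine similarity
    image_feat = image_feat / np.linalg.norm(image_feat)
    text_feats = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = logit_scale * text_feats @ image_feat   # (num_classes,)
    probs = np.exp(logits - logits.max())            # stable softmax
    return probs / probs.sum()

# Toy usage: 3 classes, 8-dim placeholder embeddings (real CLIP uses 512+ dims)
rng = np.random.default_rng(0)
text_feats = rng.normal(size=(3, 8))
image_feat = text_feats[1] + 0.1 * rng.normal(size=8)  # near class 1
probs = zero_shot_classify(image_feat, text_feats)
print(int(probs.argmax()))
```

With real models, the placeholder arrays would come from an image/text encoder pair such as OpenCLIP or the Hugging Face `transformers` CLIP implementation linked above.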
- [CoOp] Learning to Prompt for Vision-Language Models, IJCV 2022.
- [CLIP-Adapter] CLIP-Adapter: Better Vision-Language Models with Feature Adapters, arXiv 2110.
- [VT-CLIP] VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts, arXiv 2112.
- [CoCoOp] Conditional Prompt Learning for Vision-Language Models, CVPR 2022.
- [ProGrad] Prompt-aligned Gradient for Prompt Tuning, ICCV 2023.
- [SgVA-CLIP] SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification, IEEE Transactions on Multimedia, 2023.
- [Tip-Adapter] Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling, ECCV 2022.
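Tip-Adapter's training-free idea can be sketched in a few lines: cache the L2-normalized features of the few-shot support images as keys and their one-hot labels as values, then blend the cache's similarity-weighted votes with CLIP's zero-shot logits. The sketch below uses random placeholder features; `alpha` (residual ratio) and `beta` (sharpness) follow the paper's hyperparameter roles, but the default values here are illustrative:

```python
import numpy as np

def tip_adapter_logits(test_feat, cache_keys, cache_values, clip_weights,
                       alpha=1.0, beta=5.5):
    """Training-free few-shot logits via a key-value cache (Tip-Adapter style).

    test_feat:    (d,)     L2-normalized test image feature
    cache_keys:   (NK, d)  L2-normalized features of the N*K support images
    cache_values: (NK, C)  one-hot labels of the support images
    clip_weights: (C, d)   L2-normalized per-class text-prompt embeddings
    """
    # Affinity of the test image to every cached support image; exp(-beta * dist)
    # maps cosine distance to a sharpened similarity weight
    affinity = np.exp(-beta * (1.0 - cache_keys @ test_feat))   # (NK,)
    cache_logits = affinity @ cache_values                      # (C,)
    clip_logits = 100.0 * clip_weights @ test_feat              # zero-shot term
    return clip_logits + alpha * cache_logits

# Toy usage: 3 classes, 2 shots each, 8-dim placeholder features
rng = np.random.default_rng(1)
keys = rng.normal(size=(6, 8))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = np.eye(3)[[0, 0, 1, 1, 2, 2]]          # one-hot labels per support image
w = rng.normal(size=(3, 8))
w /= np.linalg.norm(w, axis=1, keepdims=True)
test = keys[0]                                  # identical to a class-0 support image
logits = tip_adapter_logits(test, keys, values, w)
print(logits.shape)
```

Setting `alpha=0` recovers plain zero-shot CLIP, which is why the cache acts as a residual refinement rather than a replacement; the paper's Tip-Adapter-F variant additionally fine-tunes the cache keys.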
- [CALIP-FS] CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention, AAAI 2023.
- [SuS-X] SuS-X: Training-Free Name-Only Transfer of Vision-Language Models, ICCV 2023.
- [Cross-Modal] Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models, CVPR 2023.
- [CaFo] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners, CVPR 2023.
- [APE] Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement, ICCV 2023.
- [RPO] Read-only Prompt Optimization for Vision-Language Few-shot Learning, ICCV 2023.
- [Proto-CLIP] Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning, arXiv 2307.
- [CALIP] CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention, AAAI 2023.
- [CuPL] Generating Customized Prompts for Zero-shot Image Classification, ICCV 2023.
- [WiSE-FT] Robust Fine-tuning of Zero-shot Models, CVPR 2022.