Awesome-Prompt-Learning-CV

This repository is a collection of awesome things about prompt learning in computer vision, including papers, code, etc.

MIT License

If you would like to contribute to our repository or have any questions/advice, see Contributing & Contact.

Contents

Papers

We list papers, implementation code (unofficial code is marked with *), etc., ordered by year and from journals to conferences. A few minimal, illustrative code sketches (not taken from the papers) follow some of the lists below.

Prompt Learning (accepted): Updating by Linbin Wang

  • CoCoOp: Conditional Prompt Learning for Vision-Language Models, CVPR, 2022 (NTU, Singapore). [Paper][Code]
  • ProDA: Prompt Distribution Learning, CVPR, 2022 (Huawei). [Paper]
  • VPT: Visual Prompt Tuning, ECCV, 2022 (Cornell). [Paper][PyTorch]
  • PerVL: This is my unicorn, Fluffy: Personalizing frozen vision-language representations, ECCV, 2022 (NVIDIA). [Paper][PyTorch]
  • OrdinalCLIP: OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression, NeurIPS, 2022 (Tsinghua University). [Paper][PyTorch]
  • CoOp: Learning to Prompt for Vision-Language Models, IJCV, 2022 (NTU, Singapore). [Paper][PyTorch]
  • DeFo: Learning to Decompose Visual Features with Latent Textual Prompts, ICLR, 2023 (UIUC). [Paper]
  • PLOT: Prompt Learning with Optimal Transport for Vision-Language Models, ICLR, 2023 (CMU). [Paper]
  • ?: Visual Classification via Description from Large Language Models, ICLR, 2023 (Columbia). [Paper]
  • CSP: Learning to Compose Soft Prompts for Compositional Zero-Shot Learning, ICLR, 2023 (Brown University). [Paper][PyTorch]
  • CaFo: Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners, CVPR, 2023 (Shanghai AI Lab). [Paper][PyTorch]
  • ?: Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR, 2023 (NYCU). [Paper][PyTorch (in construction)][Website]
  • DAM-VP: Diversity-Aware Meta Visual Prompting, CVPR, 2023 (USTC). [Paper][Code (in construction)]
  • ILM-VP: Understanding and Improving Visual Prompting: A Label-Mapping Perspective, CVPR, 2023 (Michigan State). [Paper][PyTorch]
  • KgCoOp: Visual-Language Prompt Tuning with Knowledge-guided Context Optimization, CVPR, 2023 (CAS). [Paper][PyTorch]
  • BlackVIP: BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning, CVPR, 2023 (University of Seoul). [Paper][PyTorch (in construction)]
  • EXPRES: Learning Expressive Prompting With Residuals for Vision Transformers, CVPR, 2023 (Amazon). [Paper]
  • ?: Learning to Name Classes for Vision and Language Models, CVPR, 2023 (Huawei). [Paper]
  • PMF: Efficient Multimodal Fusion via Interactive Prompting, CVPR, 2023 (Zhejiang University). [Paper]
  • MaPLe: MaPLe: Multi-modal Prompt Learning, CVPR, 2023 (MBZUAI). [Paper][PyTorch]
  • POUF: POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained models, ICML, 2023 (UT Austin). [Paper][PyTorch]
  • TPT: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models (UMD). [NeurIPS 2022][PyTorch]
  • K-LITE: K-LITE: Learning Transferable Visual Models with External Knowledge (Microsoft). [NeurIPS 2022][PyTorch]
  • CALIP: CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention (Peking University). [AAAI 2023][PyTorch]
  • ProReg: Debiased Fine-Tuning for Vision-Language Models by Prompt Regularization (Nanyang Technological University). [AAAI 2023]
  • NLIP: NLIP: Noise-robust Language-Image Pre-training (Sun Yat-sen University). [AAAI 2023]
  • ZS-SBIR: CLIP for All Things Zero-Shot Sketch-Based Image Retrieval, Fine-Grained or Not (University of Surrey). [CVPR 2023][PyTorch]
  • ?: Multimodality Helps Unimodality: Cross-Modal Few-Shot Learning with Multimodal Models (Carnegie Mellon University). [CVPR 2023][PyTorch]
  • SP: Semantic Prompt for Few-Shot Image Recognition (University of Science and Technology of China). [CVPR 2023][PyTorch]
  • CPT: Contrastive Prompt Tuning Improves Generalization in Vision-Language Models (MIT-IBM Watson AI Lab). [ICLR 2023]
  • MixGen: MixGen: A New Multi-Modal Data Augmentation (Institute of Information Engineering, CAS). [CVPR 2023][PyTorch]
  • SLIP: SLIP: Self-supervision meets Language-Image Pre-training (UC Berkeley). [ECCV 2022][PyTorch]
  • ?: Rethinking the Value of Prompt Learning for Vision-Language Models (Chinese Academy of Sciences). [ICLR 2023]
  • SubPT: Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models (Chinese Academy of Sciences). [TCSVT 2023][PyTorch]
  • CLIP-like: Delving into the Openness of CLIP (Peking University). [ICLR 2023][Code]
  • PTP: Position-guided Text Prompt for Vision-Language Pre-training (National University of Singapore). [IEEE 2022][PyTorch]
  • DPT: Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model (Northwestern Polytechnical University). [TMM 2023][Code]
  • PROTO-CLIP: PROTO-CLIP: Vision-Language Prototypical Network for Few-Shot Learning (The University of Texas at Dallas). [arXiv 2307][Code]
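
To make the core idea behind context-optimization methods such as CoOp concrete, below is a minimal, illustrative PyTorch sketch (not the official CoOp code): a few learnable context vectors are prepended to the frozen token embeddings of each class name, and only those vectors receive gradients. `PromptLearner` and `class_token_embeds` are hypothetical stand-ins for the corresponding pieces of a CLIP-style pipeline.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Learnable context vectors prepended to frozen class-name embeddings."""

    def __init__(self, n_ctx: int, class_token_embeds: torch.Tensor):
        super().__init__()
        ctx_dim = class_token_embeds.shape[-1]
        # Shared soft context ("a photo of a" becomes a learned vector sequence).
        self.ctx = nn.Parameter(torch.empty(n_ctx, ctx_dim).normal_(std=0.02))
        # Frozen token embeddings of the class names: (n_cls, n_name_tokens, dim).
        self.register_buffer("cls_embeds", class_token_embeds)

    def forward(self) -> torch.Tensor:
        n_cls = self.cls_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)  # (n_cls, n_ctx, dim)
        return torch.cat([ctx, self.cls_embeds], dim=1)    # per-class prompts

# Toy usage with random stand-ins for CLIP's token embeddings (10 classes,
# 3 name tokens each, width 512); the output would go to a frozen text encoder.
learner = PromptLearner(n_ctx=4, class_token_embeds=torch.randn(10, 3, 512))
print(learner().shape)  # torch.Size([10, 7, 512]); only learner.ctx has grads
```

In the actual methods, the concatenated prompts are passed through CLIP's frozen text encoder and the context vectors are trained with the usual image-text contrastive objective on few-shot data.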

Prompt Learning (arXiv): Updating by Jiajia Zhang

  • LAMM: Label Alignment for Multi-Modal Prompt Learning (Shanghai Jiao Tong University) [arXiv 2312] [PyTorch]
  • ?: Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization (Mohamed Bin Zayed University of AI) [arXiv 2311] [PyTorch]
  • FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning (SCCE) [arXiv 2310] [PyTorch]
  • ?: Prompting Scientific Names for Zero-Shot Species Recognition (Texas A&M University) [arXiv 2310]
  • CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training (Kakao Brain) [arXiv 2310]
  • FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation (South China University of Technology) [arXiv 2310]
  • DA-CLIP: Controlling Vision-Language Models for Universal Image Restoration (Uppsala University) [arXiv 2310]
  • FedTPG: Text-driven Prompt Generation for Vision-Language Models in Federated Learning (Bosch Center for AI) [arXiv 2310]
  • AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models (Bosch Center for Artificial Intelligence) [arXiv 2309]
  • ?: Tuning Multi-mode Token-level Prompt Alignment across Modalities (Xidian University) [arXiv 2309]
  • BPT: CLIP-based Synergistic Knowledge Transfer for Text-based Person Retrieval (Shenzhen International Graduate School) [arXiv 2309]
  • ?: Language Models as Black-Box Optimizers for Vision-Language Models (CMU) [arXiv 2309]
  • PRE: Vision-Language Prompt Learning with Reparameterization Encoder (QMUL) [arXiv 2309] [PyTorch]
  • LoGoPrompt: Synthetic Text Images Can Be Good Visual Prompts for Vision-Language Models (School of Information Science and Technology, ShanghaiTech University) [arXiv 2309]
  • ?: Image-Object-Specific Prompt Learning for Few-Shot Class-Incremental Learning (KAIST) [arXiv 2309]
  • DuAl-PT: Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment (Shanghai Jiao Tong University) [arXiv 2309]
  • DPL: Decoupled Prompt Learning for Vision-Language Models (Nanjing University) [arXiv 2308]
  • ALIP: Adaptive Language-Image Pre-training with Synthetic Caption (DeepGlint) [arXiv 2308] [PyTorch]
  • ICPC: Instance-Conditioned Prompting with Contrastive Learning for Semantic Segmentation (Alibaba Group) [arXiv 2308]
  • UEO: Towards Realistic Unsupervised Fine-tuning with CLIP (CASIA) [arXiv 2308]
  • UP-Adapter: Unsupervised Prototype Adapter for Vision-Language Models (Southern University of Science and Technology) [arXiv 2308] [PyTorch]
  • KAPT: Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models (Qilu University of Technology) [arXiv 2308]
  • RPO: Read-only Prompt Optimization for Vision-Language Few-shot Learning (Korea University) [arXiv 2308]
  • PromptSRC: Self-regulating Prompts: Foundational Model Adaptation without Forgetting (Mohamed bin Zayed University of AI) [arXiv 2307] [PyTorch]
  • VDP-Adapter: Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts (Dublin City University) [arXiv 2307] [PyTorch]
  • ICAE: In-context Autoencoder for Context Compression in a Large Language Model (Microsoft) [arXiv 2307]
  • ?: Leveraging Vision-Language Foundation Models for Fine-Grained Downstream Tasks (Conservatoire National des Arts et Métiers, CEDRIC, Paris, France) [arXiv 2307]
  • ?: Unsupervised Calibration through Prior Adaptation for Text Classification using Large Language Models (Instituto de Investigación en Ciencias de la Computación, CONICET-UBA, Argentina) [arXiv 2307]
  • LDP: Language-driven Dual-Pixel Image Defocus Deblurring Network (Beijing Institute of Technology) [arXiv 2307]
  • DVPT: Dynamic Visual Prompt Tuning of Large Pre-trained Models for Medical Image Analysis (Nankai University) [arXiv 2307]
  • MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning (Ola Ahmad) [arXiv 2307]
  • MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained Vision-Language Models (College of Computer Science and Technology, Changsha) [arXiv 2306]
  • DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World (Institute of Artificial Intelligence, Beihang University) [arXiv 2306]
  • ?: Soft-prompt Tuning for Large Language Models to Evaluate Bias (Vector Institute for AI) [arXiv 2306]
  • TKDP: Threefold Knowledge-enriched Deep Prompt Tuning for Few-shot Named Entity Recognition [arXiv 2306]
  • ProTeCt: Prompt Tuning for Hierarchical Consistency (Department of Electrical and Computer Engineering, University of California) [arXiv 2306]
  • LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding (Georgia Tech) [arXiv 2306]
  • NPT: Bridging the Gap: Neural Collapse Inspired Prompt Tuning for Generalization under Class Imbalance (Zhejiang University) [arXiv 2306]
  • SADA: Few-Shot Learning with Visual Distribution Calibration and Cross-Modal Distribution Alignment (Beihang University) [arXiv 2305] [PyTorch]
  • ?: Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization (Meta AI) [arXiv 2305] [PyTorch]
  • BSL: Black-box Prompt Tuning with Subspace Learning (Tsinghua University) [arXiv 2305]
  • PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer (University of Maryland) [arXiv 2305]
  • Instruction-ViT: Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT (University of Electronic Science and Technology of China). [arXiv 2305]
  • VPGTrans: Transfer Visual Prompt Generator across LLMs (NUS). [arXiv 2305][PyTorch][Website]
  • DRPT: DRPT: Disentangled and Recurrent Prompt Tuning for Compositional Zero-Shot Learning (Hong Kong Polytechnic University). [arXiv 2305][Code (in construction)]
  • VCoT: Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings (UCSB). [arXiv 2305]
  • PMPO: Multi-Prompt with Depth Partitioned Cross-Modal Learning (CAS). [arXiv 2305]
  • Aurora: Mode Approximation Makes Good Vision-Language Prompts (Peking University). [arXiv 2305][PyTorch]
  • DSD: Discriminative Diffusion Models as Few-shot Vision and Language Learners (Google). [arXiv 2305]
  • APE: Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement (City University of Hong Kong) [arXiv 2304] [PyTorch]
  • XSGD: Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning (Salesforce AI) [arXiv 2304]
  • SAQI: Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment (IEEE) [arXiv 2304] [PyTorch]
  • D2CSE: Difference-aware Deep continuous prompts for Contrastive Sentence Embeddings (Samsung SDS) [arXiv 2304]
  • IDPT: Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models (Tsinghua University) [arXiv 2304] [PyTorch]
  • AutoSplice: A Text-prompt Manipulated Image Dataset for Media Forensics (University at Buffalo) [arXiv 2304] [PyTorch]
  • POMP: Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition (Amazon). [arXiv 2304][PyTorch]
  • ?: What does CLIP know about a red circle? Visual prompt engineering for VLMs (Oxford). [arXiv 2304]
  • Robust-ProL: Towards Robust Prompts on Vision-Language Models (Google). [arXiv 2304]
  • ProVP: Progressive Visual Prompt Learning with Contrastive Feature Re-formation (vivo, China). [arXiv 2304]
  • ?: Chain of Thought Prompt Tuning in Vision Language Models (Peking University). [arXiv 2304]
  • LION: LION: Implicit Vision Prompt Tuning (Peking University) [arXiv 2303]
  • SeMap: From Visual Prompt Learning to Zero-Shot Transfer: Mapping Is All You Need (CISPA, Germany). [arXiv 2303]
  • R-Tuning: R-Tuning: Regularized Prompt Tuning in Open-Set Scenarios (Shanghai Jiao Tong). [arXiv 2303]
  • VPTM: Rethinking Visual Prompt Learning as Masked Visual Token Modeling (Shanghai Jiao Tong). [arXiv 2303]
  • GRAM: Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models (Huawei). [arXiv 2303]
  • PBPrompt: Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models (Xidian University). [arXiv 2303]
  • CTP-TFT: Task-Oriented Multi-Modal Mutual Leaning for Vision-Language Models (Baidu). [arXiv 2303]
  • ZPE: A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models (Google). [arXiv 2302]
  • ?: Task Bias in Vision-Language Models (Columbia). [arXiv 2212]
  • ?: CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet (University of Science and Technology of China) [arXiv 2212][PyTorch]
  • ?: Unleashing the Power of Visual Prompting At the Pixel Level (Shanghai Jiao Tong University) [arXiv 2212][Code]
  • TaskRes: Task Residual for Tuning Vision-Language Models (NUS). [arXiv 2211][Code (in construction)]
  • MVLPT: Multitask Vision-Language Prompt Tuning (Berkeley). [arXiv 2211][PyTorch]
  • TaI-DP: Texts as Images in Prompt Tuning for Multi-Label Image Recognition (Tomorrow Advancing Life (TAL)). [arXiv 2211][PyTorch]
  • ?: Bayesian Prompt Learning for Image-Language Model Generalization (University of Amsterdam) [arXiv 2210]
  • PGN: Prompt Generation Networks for Efficient Adaptation of Frozen Vision Transformers (University of Amsterdam). [arXiv 2210][PyTorch]
  • UPT: Unified Vision and Language Prompt Learning (NTU, Singapore). [arXiv 2210][Code (in construction)]
  • CPL: CPL: Counterfactual Prompt Learning for Vision and Language Models (UC Santa Cruz). [arXiv 2210]
  • PTP: Prompting through Prototype: A Prototype-based Prompt Learning on Pretrained Vision-Language Models (Baidu). [arXiv 2210]
  • LASP: Language-Aware Soft Prompting for Vision & Language Foundation Models (Samsung). [arXiv 2210][Website]
  • VPT: Variational prompt tuning improves generalization of vision-language models (Samsung). [arXiv 2210]
  • DoPrompt: Prompt Vision Transformer for Domain Generalization (National University of Singapore) [arXiv 2208][PyTorch]
  • SoftCPT: Prompt Tuning with Soft Context Sharing for Vision-Language Models (National Laboratory of Pattern Recognition) [arXiv 2208][Code]
  • ?: Prompt-to-Prompt Image Editing with Cross Attention Control (Google Research) [arXiv 2208][Code]
  • DPT: Dual Modality Prompt Tuning for Vision-Language Pre-Trained Model (Chinese Academy of Sciences) [arXiv 2208][PyTorch]
  • CAVPT: Class-Aware Visual Prompt Tuning for Vision-Language Pre-Trained Model (Northwestern Polytechnical University, China). [arXiv 2208][Code]
  • ProGrad: Prompt-aligned Gradient for Prompt Tuning (Nanyang Technological University) [arXiv 2205][PyTorch]
  • UPL: Unsupervised Prompt Learning for Vision-Language Models (Peking University) [arXiv 2204][Code]
  • Visual-Prompting: Exploring Visual Prompts for Adapting Large-Scale Models (MIT). [arXiv 2203][PyTorch][Website]
  • DAPL: Domain Adaptation via Prompt Learning (Tsinghua University) [arXiv 2202][Code]
  • OOD: Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution (Stanford University) [arXiv 2202][PyTorch]
  • AP: Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization (The University of Tokyo) [arXiv 2111]
  • Tip-Adapter: Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling (Shanghai AI Laboratory) [arXiv 2111][PyTorch]
  • DPL: Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains (The University of Tokyo) [arXiv 2111][PyTorch]
  • CLIP-Adapter: Better Vision-Language Models with Feature Adapters (Shanghai AI Lab) [arXiv 2110][PyTorch]
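
Several entries above (e.g., CLIP-Adapter and Tip-Adapter) adapt CLIP through lightweight feature adapters rather than prompts. Below is a minimal sketch of that idea, assuming frozen CLIP features; the dimensions and the residual ratio `alpha` are illustrative, not the papers' exact values.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck MLP on frozen CLIP features with a residual blend."""

    def __init__(self, dim: int = 512, reduction: int = 4, alpha: float = 0.2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.ReLU(inplace=True),
        )
        self.alpha = alpha  # how much adapted signal to mix in

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # The residual blend keeps most of the pre-trained knowledge intact.
        return self.alpha * self.fc(feats) + (1 - self.alpha) * feats

feats = torch.randn(8, 512)    # stand-in for frozen CLIP image features
print(Adapter()(feats).shape)  # torch.Size([8, 512])
```

Only the adapter's few parameters are trained, which is why these methods remain cheap on few-shot data.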

CLIP Variants: Updating by Siyu He

  • VL-T5: Unifying Vision-and-Language Tasks via Text Generation (UNC Chapel Hill). [ICML 2021][PyTorch]
  • DenseCLIP: DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting (Tsinghua University). [CVPR 2022][PyTorch]
  • BeamCLIP: Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching (LG). [NeurIPS 2022]
  • UniCLIP: UniCLIP: Unified Framework for Contrastive Language-Image Pre-training (LG AI Research). [NeurIPS 2022]
  • FLIP: Scaling Language-Image Pre-training via Masking (Meta AI). [CVPR 2023][PyTorch]
  • MaskCLIP: MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining (USTC). [CVPR 2023][PyTorch]
  • Frozen: Multimodal Few-Shot Learning with Frozen Language Models (DeepMind). [arXiv 2106]
  • CPT: CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models (Tsinghua University). [arXiv 2109][PyTorch]
  • MAnTiS: Multimodal Conditionality for Natural Language Generation. [arXiv 2109]
  • CAE v2: CAE v2: Context Autoencoder with CLIP Targets (Baidu VIS). [arXiv 2211]
  • A-CLIP: Attentive Mask CLIP (Microsoft Research Asia). [arXiv 2212]
  • EVA-CLIP: EVA-CLIP: Improved Training Techniques for CLIP at Scale (Beijing Academy of Artificial Intelligence). [arXiv 2303][PyTorch]
  • LaCLIP: Improving CLIP Training with Language Rewrites (Google). [arXiv 2305][PyTorch]
  • DLIP: DLIP: Distilling Language-Image Pre-training (Xiamen University). [arXiv 2308]
  • ALIP: ALIP: Adaptive Language-Image Pre-training with Synthetic Caption (DeepGlint). [arXiv 2308][PyTorch]
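
For context, all of these variants build on CLIP's prompt-based zero-shot classification. The sketch below shows the standard recipe with hand-written prompt templates and prompt ensembling, using OpenAI's `clip` package; `example.jpg` is a placeholder for any test image.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["dog", "cat", "bird"]
templates = ["a photo of a {}.", "a drawing of a {}."]

with torch.no_grad():
    # Prompt ensembling: average normalized text features over templates.
    weights = []
    for name in classes:
        tokens = clip.tokenize([t.format(name) for t in templates]).to(device)
        feats = model.encode_text(tokens)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        weights.append(feats.mean(dim=0))
    text_feats = torch.stack(weights)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

    # "example.jpg" is a placeholder path for any test image.
    image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
    image_feats = model.encode_image(image)
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feats @ text_feats.T).softmax(dim=-1)

print(dict(zip(classes, probs.squeeze(0).tolist())))
```

Prompt learning methods replace the hand-written templates above with learned (soft) prompts, while the CLIP variants in this list change the pre-training itself.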

In-context learning: Updating by Yansheng Gao

  • CPL: CPL: Counterfactual Prompt Learning for Vision and Language Models, arXiv, 2022 (UC Santa Cruz). [Paper]
  • Frozen: Multimodal Few-Shot Learning with Frozen Language Models, NeurIPS, 2021 (DeepMind). [Paper]
  • HierKD: Open-Vocabulary One-Stage Detection with Hierarchical Visual-Language Knowledge Distillation, CVPR, 2022 (MAIS & NLPR). [Paper][PyTorch]
  • ?: What learning algorithm is in-context learning? Investigations with linear models, arXiv, 2022 (Google Research). [Paper][PyTorch]

Domain Adaptation + Prompt Learning: Updating by Yongguang Li

  • CTTA: Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation [AAAI 2023]
  • AP: Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization (The University of Tokyo) [arXiv 2111]
  • DAPL: Domain Adaptation via Prompt Learning (Tsinghua University) [arXiv 2202][Code]
  • MIRO: Domain Generalization by Mutual-Information Regularization with Pre-trained Models (Kakao Brain) [ECCV 2022][Code]
  • DPL: Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains (The University of Tokyo) [arXiv 2111][PyTorch]
  • DOT: Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation (Beijing Institute of Technology) [ACMMM 2022][PyTorch]
  • MPA: Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation (Fudan University) [ICLR 2023]
  • PADA: PADA: Example-based Prompt Learning for on-the-fly Adaptation to Unseen Domains (Technion - Israel Institute of Technology) [TACL 2022][PyTorch]
  • DoPrompt: Prompt Vision Transformer for Domain Generalization (National University of Singapore) [arXiv 2208][PyTorch]
  • DePT: Visual Prompt Tuning for Test-time Domain Adaptation (Rutgers University) [ICLR 2023]
  • IPL: Zero-Shot Generative Model Adaptation via Image-Specific Prompt Learning (Tsinghua University) [CVPR 2023][PyTorch]
  • DomainGen: CLIP the Gap: A Single Domain Generalization Approach for Object Detection (CVLab, EPFL) [CVPR 2023][PyTorch]
  • RADA-prompt: Domain Prompt Tuning via Meta Relabeling for Unsupervised Adversarial Adaptation (Eastern Institute of Technology) [TMM 2023]
  • AD-CLIP: AD-CLIP: Adapting Domains in Prompt Space Using CLIP (Indian Institute of Technology Bombay) [arXiv 2308]
  • PADCLIP: PADCLIP: Pseudo-labeling with Adaptive Debiasing in CLIP for Unsupervised Domain Adaptation [ICCV 2023]
  • PromptStyler: PromptStyler: Prompt-driven Style Generation for Source-free Domain Generalization [ICCV 2023]
  • ?: Open-Set Domain Adaptation with Visual-Language Foundation Models [arXiv 2307]
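
Several of the test-time entries above (e.g., DePT here, and TPT in the accepted list) adapt prompts on unlabeled test data. Below is a schematic of the common entropy-minimization recipe: random tensors stand in for frozen CLIP features, and `text_features` is a hypothetical stand-in for re-encoding prompts from the learnable context. It illustrates the idea only; it is not any paper's exact method.

```python
import torch
import torch.nn.functional as F

# Random stand-ins for frozen CLIP features (the real encoders stay frozen).
image_feats = F.normalize(torch.randn(1, 512), dim=-1)
cls_embeds = F.normalize(torch.randn(10, 512), dim=-1)  # per-class name feats

ctx = (0.02 * torch.randn(4, 512)).requires_grad_()  # learnable soft context
opt = torch.optim.AdamW([ctx], lr=1e-3)

def text_features(ctx: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for re-encoding class prompts from the context.
    return F.normalize(cls_embeds + ctx.mean(dim=0), dim=-1)

for _ in range(3):  # a few gradient steps on the single test sample
    probs = (100.0 * image_feats @ text_features(ctx).T).softmax(dim=-1)
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()  # entropy
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))  # prediction entropy should decrease over the steps
```

Making the model more confident on each test sample (lower entropy) is what lets these methods adapt without any target-domain labels.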

Contributing & Contact

Feel free to contribute to our repository.

  • If you would like to correct mistakes, please do so directly;
  • If you would like to add/update papers, please follow the existing format;
  • If you have any questions or advice, please contact us by email (summitlsf@outlook.com) or GitHub issues.

Thank you for your support!
