This is the official code release for the paper 'Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling'.
Tip-Adapter achieves faster convergence and better performance than CLIP-Adapter by initializing the adapter with a cache model built from the few-shot training set, rather than learning the adapter weights from scratch.
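To make the cache-model idea concrete, here is a minimal sketch in NumPy. It assumes the formulation from the paper: cached training features act as keys, one-hot labels as values, and the cache prediction (weighted by a sharpness parameter beta and blend weight alpha) is added to the zero-shot CLIP logits. The function name and argument names are illustrative, not the repo's actual API.

```python
import numpy as np

def tip_adapter_logits(test_feat, cache_keys, cache_values, clip_weights,
                       alpha=1.0, beta=5.5):
    """Sketch of Tip-Adapter inference for a single query feature.

    test_feat:    (d,)   L2-normalized image feature of the test sample
    cache_keys:   (N, d) L2-normalized features of the few-shot training images
    cache_values: (N, C) one-hot labels of the cached training images
    clip_weights: (d, C) L2-normalized CLIP text classifier weights
    """
    # Cosine similarity between the query and every cached key, shape (N,)
    affinity = cache_keys @ test_feat
    # Sharpened affinities retrieve the cached one-hot labels, shape (C,)
    cache_logits = np.exp(-beta * (1.0 - affinity)) @ cache_values
    # Standard zero-shot CLIP logits (temperature-scaled), shape (C,)
    clip_logits = 100.0 * (test_feat @ clip_weights)
    # Blend cache knowledge into the zero-shot prediction
    return clip_logits + alpha * cache_logits
```

Because the cache is built directly from encoded training features and labels, this classifier needs no gradient updates at all; fine-tuning the keys afterwards recovers the trained variant.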
To reproduce the ImageNet result, put tip_adapter_ImageNet.py into CLIP's folder and run

python tip_adapter_ImageNet.py

This yields 65.51% accuracy on the ImageNet validation set.
This repo will be completed in a few days.
Contributors: Peng Gao, Renrui Zhang
Acknowledgements: this repo builds on CLIP, CoOp and CLIP-Adapter.