This repository provides the official PyTorch implementation of our WACV 2024 (Oral) paper, "ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation".
We have evaluated our code on an NVIDIA A100 GPU with 40GB of memory, using a batch size of 64. For GPUs with less memory, please use the --parallel option together with a smaller batch size.
We tested our code with PyTorch 1.12.0.
We use CLIP ViT-L/14 as the main base model for adaptation. Other architectures can be used by setting the --architecture option. Our code will automatically download the corresponding CLIP checkpoint and place it under the ./ckpt folder.
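The options above can be combined on the command line. A hypothetical invocation is sketched below; the actual entry-point script name and flag spellings may differ in this repository, so check the training script before running:

```shell
# Default setup: single A100 (40GB), ViT-L/14, batch size 64.
# "main.py" is a placeholder for the repo's actual entry-point script.
python main.py --architecture ViT-L/14 --batch_size 64

# Smaller GPU: shard across devices with --parallel and reduce the batch size.
python main.py --architecture ViT-L/14 --batch_size 32 --parallel
```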
ReCLIP is released under the Apache 2.0 license. Please see the LICENSE file for more information.
@inproceedings{xuefeng2023reclip,
title={ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation},
author={Hu, Xuefeng and Zhang, Ke and Xia, Lu and Chen, Albert and Luo, Jiajia and Sun, Yuyin and Wang, Ken and Qiao, Nan and Zeng, Xiao and Sun, Min and others},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
year={2024},
organization={IEEE}
}
This work was completed during Xuefeng's internship at Amazon.