ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation

Overview

This repository provides the official PyTorch implementation of our WACV 2024 (Oral) paper, ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation.

Hardware

We have evaluated our code on an NVIDIA A100 GPU with 40 GB of memory, using a batch size of 64. Please use --parallel and a smaller batch size on GPUs with less memory.
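
The sketch below illustrates one way a --parallel style option could spread each batch across several smaller GPUs. It assumes a torch.nn.DataParallel wrapper, which may differ from this repository's actual implementation:

```python
import torch

# Hedged sketch: split each batch across all visible GPUs so that a smaller
# per-GPU memory budget can still accommodate the effective batch size of 64.
# The DataParallel wrapper is an assumption about what --parallel does.
def maybe_parallelize(model: torch.nn.Module) -> torch.nn.Module:
    if torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)  # each GPU sees batch_size / num_gpus samples
    return model.cuda() if torch.cuda.is_available() else model
```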

Environment

We tested our code with PyTorch 1.12.0.

Model Weight

We use CLIP ViT-L/14 as our main base model for adaptation. It is also possible to use other architectures by configuring the --architecture option. Our code will automatically download the CLIP checkpoint from link and place it under the ./ckpt folder.
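
For reference, the behavior described above can be approximated with the OpenAI CLIP package; this is a minimal sketch of the checkpoint handling, not this repository's actual loading code:

```python
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)

# Minimal sketch (not the repository's actual code): download the ViT-L/14
# checkpoint into ./ckpt if it is not already cached, then load the model.
model, preprocess = clip.load("ViT-L/14", download_root="./ckpt")
print(clip.available_models())  # alternative backbones selectable via --architecture
```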

License

ReCLIP is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Citations

@inproceedings{xuefeng2023reclip,
  title={ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free Domain Adaptation},
  author={Hu, Xuefeng and Zhang, Ke and Xia, Lu and Chen, Albert and Luo, Jiajia and Sun, Yuyin and Wang, Ken and Qiao, Nan and Zeng, Xiao and Sun, Min and others},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2024}
}

Acknowledgements

This work was completed during Xuefeng's internship at Amazon.