Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Compositional Understanding
🎉 Our paper was accepted to CVPR 2024.
TL;DR: We propose two losses over generated hard negative examples to enhance CLIP's compositional understanding ability.
This repo is forked from the wonderful OpenCLIP; for model and training details, please refer to the original repo.
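For intuition, the two objectives can be sketched as follows. This is a minimal pure-Python illustration, not the repo's actual implementation; the function names, margin, and temperature values are our own assumptions, and scores stand in for image-text cosine similarities.

```python
import math

def cross_modal_rank_loss(s_pos, s_neg, margin=0.2):
    # Hinge-style ranking loss (illustrative): the image-text score of the
    # true caption (s_pos) should exceed that of its generated hard
    # negative (s_neg) by at least `margin`.
    return max(0.0, margin - (s_pos - s_neg))

def intra_modal_contrastive_loss(s_pos, s_negs, temperature=0.07):
    # InfoNCE-style loss (illustrative) contrasting the positive caption
    # against intra-modal (text-side) hard negatives.
    logits = [s_pos / temperature] + [s / temperature for s in s_negs]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy similarity scores: positive caption 0.8, hard negatives 0.7 / 0.6
print(cross_modal_rank_loss(0.8, 0.7))              # ~0.1 (margin violated)
print(intra_modal_contrastive_loss(0.8, [0.7, 0.6]))
```

In the real model both losses are computed over batched embeddings in PyTorch; the sketch only shows the per-example shape of each objective.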
The checkpoints can be downloaded directly with gdown using the following script:
pip install --upgrade --no-cache-dir gdown # update gdown to avoid a known bug, see https://github.com/wkentaro/gdown/issues/146
gdown 1DWPw3CtGh5cHz9bW_-iXRSG7BBUVl13K # download checkpoint for CE-CLIP
The training data is generated from COCO 2014. You can either download COCO yourself and set the dataset path in dataset.py, or simply run the following script to download and generate the dataset:
cd data/
bash prepare_dataset.sh
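For intuition, a generated hard negative typically keeps a caption's words but perturbs their arrangement, so a model must attend to composition rather than the bag of words. A toy sketch of this idea (our own illustration, not the repo's actual generation pipeline):

```python
def swap_tokens(caption, i, j):
    # Toy hard-negative generator: swapping two tokens preserves the
    # bag of words but changes the meaning (here, an attribute swap).
    tokens = caption.split()
    tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)

print(swap_tokens("a black dog chases a white cat", 1, 5))
# -> "a white dog chases a black cat"
```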
You need to specify training parameters such as --gres=gpu:a100:2 and batch_size in scripts/run_all.sh; please refer to that script for more details. To run the training, use the following commands:
cd scripts/
bash run_multiple_nodes.sh
The resulting checkpoints will be saved to Enhance-FineGrained/src/Outputs.
We evaluate our method on four downstream benchmarks: ARO, VALSE, VL-CheckList, and the very recent SugarCrepe. We also provide evaluation code; however, you need to download the datasets from their official GitHub pages to evaluate on them.
Evaluation code for ARO is included in Enhance-FineGrained/vision-language-models-are-bows. To reproduce the results:
- Set up the environment by running bash Enhance-FineGrained/vision-language-models-are-bows/scripts/create_environment.sh
- cd to Enhance-FineGrained/vision-language-models-are-bows/scripts and change the checkpoint path in reproduce_aro.sh, then run the script to reproduce the results. Note that the dataset will be downloaded automatically.
- Evaluation code for VALSE is included in Enhance-FineGrained/VALSE. To reproduce results on VALSE, please download the dataset here first, then replace the dataset path in Enhance-FineGrained/VALSE/clip_valse_eval.py and Enhance-FineGrained/VALSE/xvlm_valse_eval.py.
- Replace $checkpoint in Enhance-FineGrained/VALSE/scripts, then run the scripts; evaluation results will be saved to Enhance-FineGrained/VALSE/output.
❗ Note: The original dataset is incomplete; we encourage skipping this dataset.
Please refer to the official GitHub repo to download the dataset and perform the evaluation. Note that downloading the dataset can be quite cumbersome; we provide a script here.
SugarCrepe is a benchmark for faithful vision-language compositionality evaluation. It fixes several biases in the above benchmarks that rendered them hackable, allowing blind models with no access to the image to outperform state-of-the-art vision-language models.
To evaluate on this dataset, simply clone their repo, follow their installation setup, and point --pretrained to our checkpoint:
python main_eval.py --model ViT-B-32 --pretrained Enhance-FineGrained/clip/epoch_5.pt \
--output ./output \
--coco_image_root ./data/coco/images/val2017/ \
    --data_root ./data/
Our method entails curriculum learning, which is validated by the growth of the adaptive threshold during training.
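As a rough illustration of such a curriculum (the linear schedule below is our own sketch, not the paper's actual adaptive mechanism), the ranking threshold can grow over training, so hard negatives must be pushed progressively further from the positive caption:

```python
def curriculum_threshold(step, total_steps, t_min=0.0, t_max=0.2):
    # Illustrative linear schedule: the margin grows from t_min to t_max
    # as training progresses, making the ranking objective harder.
    frac = min(step / total_steps, 1.0)
    return t_min + frac * (t_max - t_min)

print(curriculum_threshold(0, 1000))     # 0.0 at the start of training
print(curriculum_threshold(1000, 1000))  # 0.2 at the end
```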
@article{zhang2023contrasting,
title={Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding},
author={Zhang, Le and Awal, Rabiul and Agrawal, Aishwarya},
journal={arXiv preprint arXiv:2306.08832},
year={2023}
}
Please let us know if you have further questions or comments; reach out to le.zhang@mila.quebec.