```shell
git clone https://github.com/Sadcardation/MLLM-Refusal.git
cd MLLM-Refusal
conda env create -f environment.yml
conda activate mllm_refusal
```
Download the datasets from the following links:
- CelebA: Download Link (Validation)
- GQA: Download Link (Test Balanced)
- TextVQA: Download Link (Test)
- VQAv2: Download Link (Validation)
Place the downloaded datasets in the `datasets` directory. The directory structure should look like this:
```
MLLM-Refusal
└── datasets
    ├── CelebA
    │   ├── Images
    │   │   ├── 166872.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── GQA
    │   ├── Images
    │   │   ├── n179334.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    ├── TextVQA
    │   ├── Images
    │   │   ├── 6a45a745afb68f73.jpg
    │   │   └── ...
    │   ├── sampled_data_100.xlsx
    │   └── similar_questions.json
    └── VQAv2
        ├── Images
        │   └── mscoco
        │       └── val2014
        │           ├── COCO_val2014_000000000042.jpg
        │           └── ...
        ├── sampled_data_100.xlsx
        └── similar_questions.json
```
`sampled_data_100.xlsx` contains the 100 sampled image-question pairs for each dataset. `similar_questions.json` contains the similar questions for each question in the sampled data.
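The two metadata files can be loaded together. Below is a minimal sketch, assuming `pandas` is installed and that `similar_questions.json` maps each sampled question string to a list of similar questions; the exact column and key layout is an assumption, so inspect the files before relying on specific names:

```python
import json
from pathlib import Path


def load_dataset_metadata(dataset_dir):
    """Load one dataset's sampled pairs and its similar-question map.

    Assumes pandas is available; the xlsx column names are not specified
    in this README, so check the file before using them.
    """
    import pandas as pd  # heavy dependency kept local to this helper
    dataset_dir = Path(dataset_dir)
    samples = pd.read_excel(dataset_dir / "sampled_data_100.xlsx")
    with open(dataset_dir / "similar_questions.json") as f:
        similar = json.load(f)
    return samples, similar


def pair_similar_questions(questions, similar):
    """Attach each question's similar-question list (empty if missing)."""
    return {q: similar.get(q, []) for q in questions}
```

`pair_similar_questions` is a hypothetical helper, shown only to illustrate how the sampled questions and `similar_questions.json` relate.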
Clone the MLLM repositories into the `models` directory and follow the installation instructions for each MLLM. Include the corresponding `utils` directory in each MLLM's directory.
- LLaVA-1.5 additional instructions:
  - Add
    ```python
    config.mm_vision_tower = "openai/clip-vit-large-patch14"
    ```
    below here to replace the original vision encoder `openai/clip-vit-large-patch14-336` that LLaVA uses, so that the resolutions of perturbed images are unified across the different MLLMs.
- Qwen-VL additional instructions:
  - Add
    ```python
    if kwargs: kwargs['visual']['image_size'] = 224
    ```
    below here to unify the resolutions of perturbed images across the different MLLMs.
  - Add
    ```python
    image_emb = None,
    ```
    as an additional argument to the `forward` function of `QWenModel`, and replace this line of code with
    ```python
    images = image_emb if image_emb is not None else self.visual.encode(images)
    ```
    so that image embeddings can be passed directly to the `forward` function.
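The intended effect of the `QWenModel` change can be sketched in isolation. The class below is a simplified stand-in, not Qwen-VL's real model (the actual `forward` has many more arguments, and `DummyVisual` replaces the real vision tower):

```python
class DummyVisual:
    """Stand-in for Qwen-VL's vision tower (assumption: real API differs)."""

    def encode(self, images):
        return f"encoded({images})"


class QWenModelSketch:
    """Minimal sketch of the patched image path in forward()."""

    def __init__(self):
        self.visual = DummyVisual()

    def forward(self, images=None, image_emb=None):
        # The patch: precomputed image embeddings bypass the vision encoder.
        images = image_emb if image_emb is not None else self.visual.encode(images)
        return images
```

When `image_emb` is supplied, the vision encoder is skipped entirely, which is what allows perturbed image embeddings to be fed to the model directly.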
To produce images with refusal perturbations on the 100 sampled images for the VQAv2 dataset on LLaVA-1.5, with three different types of shadow questions under default settings, run the following command:
```shell
./attack.sh
```
The results will be saved under LLaVA-1.5's directory.
To evaluate the results, run the following command:
```shell
./evaluate.sh
```
with the corresponding MLLM's directory and the name of the result directory. Refusal rates will be printed to the terminal and saved in each result directory.
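The refusal rate is the fraction of responses judged to be refusals. A minimal sketch of such a metric is below; the phrase list is purely illustrative, and the actual patterns used by `evaluate.sh` may differ:

```python
# Illustrative refusal phrases only -- an assumption, not the repo's list.
REFUSAL_PATTERNS = ("i cannot", "i can't", "i'm sorry", "unable to answer")


def refusal_rate(responses):
    """Fraction of responses containing a refusal phrase (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(any(p in r.lower() for p in REFUSAL_PATTERNS) for r in responses)
    return hits / len(responses)
```

For example, `refusal_rate(["I cannot answer that.", "It is a cat."])` counts one refusal out of two responses.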
If you find MLLM-Refusal helpful in your research, please consider citing:
```bibtex
@article{shao2024refusing,
  title={Refusing Safe Prompts for Multi-modal Large Language Models},
  author={Shao, Zedian and Liu, Hongbin and Hu, Yuepeng and Gong, Neil Zhenqiang},
  journal={arXiv preprint arXiv:2407.09050},
  year={2024}
}
```