
Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression


This repository contains the code of our paper "Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression".

Overview

Our paper verifies that 12 state-of-the-art Perturbative Availability Poisoning (PAP) methods are vulnerable to Image Shortcut Squeezing (ISS), which is based on simple compression techniques (i.e., grayscale conversion and JPEG compression). For example, on average, ISS restores CIFAR-10 model accuracy to 81.73%, surpassing the previous best preprocessing-based countermeasures by 37.97% absolute. We hope that future studies will consider such (simple) countermeasures during the development of new poisoning methods.
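The two squeezing operations can be sketched with standard Pillow calls. This is a minimal illustration, not the repository's implementation; the helper names are hypothetical:

```python
from io import BytesIO

from PIL import Image


def grayscale_squeeze(img: Image.Image) -> Image.Image:
    # Convert to single-channel grayscale, then back to RGB so the
    # model's expected input shape is unchanged.
    return img.convert("L").convert("RGB")


def jpeg_squeeze(img: Image.Image, quality: int = 10) -> Image.Image:
    # Re-encode at low JPEG quality; the lossy compression discards
    # much of the perturbation signal.
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```

In the paper's notation, JPEG-10 corresponds to `quality=10`.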

Categorization of existing poisoning methods

We carry out a systematic analysis of compression-based countermeasures for PAP. We identify a strong dependency of the perturbation frequency patterns on the surrogate model property: perturbations generated on slightly-trained surrogates exhibit spatially low-frequency patterns, while perturbations generated on fully-trained surrogates exhibit spatially high-frequency patterns, as shown in the figure below.

[Figure: example perturbations and their frequency patterns]
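The low- vs. high-frequency characterization can be checked with a simple 2D FFT of the perturbation. The sketch below is illustrative only; `delta` is assumed to be the poison-minus-clean difference for one channel:

```python
import numpy as np


def radial_energy_profile(delta: np.ndarray) -> np.ndarray:
    # delta: H x W perturbation (one channel). Returns the mean spectral
    # energy at each integer radius from the DC component, so small
    # indices correspond to low spatial frequencies.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    energy = np.bincount(r.ravel(), weights=spec.ravel())
    counts = np.bincount(r.ravel())
    return energy / np.maximum(counts, 1)
```

A perturbation whose profile peaks at small radii is spatially low-frequency (and is what grayscale conversion targets), while one peaking at large radii is high-frequency (and is what JPEG compression removes).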

Evaluation results (CIFAR-10 test accuracy, %) of ISS against 12 existing PAP methods.

| Poisons \ ISS | w/o | Grayscale | JPEG-10 |
|---|---|---|---|
| Clean (no poison) | 94.68 | 92.41 | 85.38 |
| Deep Confuse $(L_{\infty} = 8)$ | 16.30 | 93.07 | 81.84 |
| NTGA $(L_{\infty} = 8)$ | 42.46 | 74.32 | 69.49 |
| EM $(L_{\infty} = 8)$ | 21.05 | 93.01 | 81.50 |
| REM $(L_{\infty} = 8)$ | 25.44 | 92.84 | 81.50 |
| ShortcutGen $(L_{\infty} = 8)$ | 33.05 | 86.42 | 79.49 |
| TensorClog $(L_{\infty} = 8)$ | 88.70 | 79.75 | 85.29 |
| Hypocritical $(L_{\infty} = 8)$ | 71.54 | 61.86 | 85.45 |
| TAP $(L_{\infty} = 8)$ | 8.17 | 9.11 | 83.87 |
| SEP $(L_{\infty} = 8)$ | 3.85 | 3.57 | 84.37 |
| LSP $(L_{2} = 1.0)$ | 19.07 | 82.47 | 83.01 |
| AR $(L_{2} = 1.0)$ | 13.28 | 34.04 | 85.15 |
| OPS $(L_{0} = 1)$ | 36.55 | 42.44 | 82.53 |

How to apply ISS on poisons?

Prepare poisoned images as .png files in the folder PATH/TO/POISON_FOLDER, following the order of the original CIFAR-10 dataset.

To train on grayscaled poisons:

python main.py --exp_type $TYPEOFPOISONS --poison_path PATH/TO/POISON_FOLDER --poison_rate 1 --net resnet18 --grayscale True --exp_path PATH/TO/SAVE/RESULTS/

To train on JPEG compressed poisons:

python main.py --exp_type $TYPEOFPOISONS --poison_path PATH/TO/POISON_FOLDER --poison_rate 1 --net resnet18 --jpeg 10 --exp_path PATH/TO/SAVE/RESULTS/

A quick-start example:

We provide an example of applying ISS to CIFAR-10 poisons generated by Targeted Adversarial Poisoning (TAP). The poisons are generated using the official TAP GitHub repository, and the poisoned images are included in data/TAP/.

Running bash train.sh will start training on TAP poisons with JPEG-10; results can be found in experiments/TAP/jpeg10/.

Classification performance when training solely on TAP poisons can be checked by running:

python main.py --poison_type TAP --exp_path ./experiments/TAP/TAP_poisoned --poison_path ./data/TAP/

Cite our work:

Please cite our paper if you use this implementation in your research.

@misc{liu2023image,
      title={Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression}, 
      author={Zhuoran Liu and Zhengyu Zhao and Martha Larson},
      year={2023},
      eprint={2301.13838},
      archivePrefix={arXiv}
}

Acknowledgement:

Training code is adapted from kuangliu's repository, Train CIFAR10 with PyTorch.