This is the official implementation of the experiments from the paper "Adversarial Framing for Image and Video Classification" (video) by Michał Zając, Konrad Żołna, Negar Rostamzadeh and Pedro Pinheiro.
The code from the paper "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?" is also included in the `deps/resnets_3d` folder, as we attack the model from that paper.
Our code was originally forked from the Classifier-agnostic saliency map extraction repository.
The code uses Python 3 and the packages listed in `requirements.txt`. If you use pip, you can install them with `pip install -r requirements.txt`.
- Follow the instructions to download and unpack the dataset.
- Set the environment variable `IMAGENET_DATA_DIR` to the dataset directory with `export IMAGENET_DATA_DIR=/your/imagenet/dir`, where `/your/imagenet/dir` should contain the `train` and `val` folders (as in the instructions above).
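The setup above can be sanity-checked with a short shell sketch. The `mktemp` directory below merely stands in for a real ImageNet path so the layout check can be demonstrated end to end; point `IMAGENET_DATA_DIR` at your actual dataset instead.

```shell
# A throwaway directory stands in for the real dataset path here.
IMAGENET_DATA_DIR="$(mktemp -d)"
mkdir -p "$IMAGENET_DATA_DIR/train" "$IMAGENET_DATA_DIR/val"
export IMAGENET_DATA_DIR

# Sanity-check the layout the training code expects.
for d in train val; do
  if [ -d "$IMAGENET_DATA_DIR/$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d"
  fi
done
```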
- Follow the instructions to download, unpack and preprocess the dataset.
- Set the data environment variables `UCF101_DATA_DIR` and `UCF101_ANNOTATION_PATH`:
  - `export UCF101_DATA_DIR=/your/data/dir`, where `/your/data/dir` is the `jpg_video_directory` from the instructions above.
  - `export UCF101_ANNOTATION_PATH=/your/annotation/path`, where `/your/annotation/path` is the path to the file `ucf101_01.json` created with the above instructions.
- Download the pretrained model `resnext-101-kinetics-ucf101_split1.pth` from here. The model comes from the paper "Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?".
- Set the model environment variable `UCF101_MODEL` with `export UCF101_MODEL=/your/model/path`.
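The three UCF101-related variables can be checked the same way. In this sketch, throwaway files created under a `mktemp` directory stand in for the real preprocessed data, annotation file, and downloaded checkpoint; substitute your own paths.

```shell
# Placeholder files stand in for the real data, annotation and model.
tmp="$(mktemp -d)"
mkdir -p "$tmp/jpg_video_directory"
touch "$tmp/ucf101_01.json" "$tmp/resnext-101-kinetics-ucf101_split1.pth"

export UCF101_DATA_DIR="$tmp/jpg_video_directory"
export UCF101_ANNOTATION_PATH="$tmp/ucf101_01.json"
export UCF101_MODEL="$tmp/resnext-101-kinetics-ucf101_split1.pth"

# Verify everything the experiments need is in place.
[ -d "$UCF101_DATA_DIR" ] && echo "data dir ok"
[ -f "$UCF101_ANNOTATION_PATH" ] && echo "annotation ok"
[ -f "$UCF101_MODEL" ] && echo "model ok"
```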
- First, run `export PYTHONPATH=$PYTHONPATH:deps` from the main project directory.
- To reproduce the untargeted ImageNet experiments, run `python3 main.py --dataset imagenet --width $WIDTH --epochs 5 --lr 0.1 --lr-decay-wait 2 --lr-decay-coefficient 0.1`, where you should set the `WIDTH` of the framing.
- To reproduce the untargeted UCF101 experiments, run `python3 main.py --dataset ucf101 --width $WIDTH --epochs 60 --lr 0.03 --lr-decay-wait 15 --lr-decay-coefficient 0.3`, where you should set the `WIDTH` of the framing.
- To draw some examples of attacks on ImageNet, run `python3 draw_examples_imagenet.py --framing $CHECKPOINT`. As the `CHECKPOINT` you can use one of the models from the `pretrained` directory.
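The reproduction commands above can be wrapped in a small shell sketch. `WIDTH=4` is an arbitrary illustrative value, not a recommendation from the paper; the sketch only prints the commands, since actually running them requires the datasets and environment variables from the previous sections.

```shell
# Assemble the reproduction commands for a given framing width.
export PYTHONPATH="$PYTHONPATH:deps"
WIDTH=4  # illustrative value; vary to reproduce different settings

imagenet_cmd="python3 main.py --dataset imagenet --width $WIDTH \
--epochs 5 --lr 0.1 --lr-decay-wait 2 --lr-decay-coefficient 0.1"
ucf101_cmd="python3 main.py --dataset ucf101 --width $WIDTH \
--epochs 60 --lr 0.03 --lr-decay-wait 15 --lr-decay-coefficient 0.3"

# Print rather than execute: training needs the datasets in place.
echo "$imagenet_cmd"
echo "$ucf101_cmd"
```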
If you found this code useful, please use the following citation:
```
@inproceedings{zajac2019framing,
  title={Adversarial Framing for Image and Video Classification},
  author={Zaj\k{a}c, Micha\l{} and \.Zo\l{}na, Konrad and Rostamzadeh, Negar and Pinheiro, Pedro},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2019}
}
```