Official implementation of the ICLR 2022 paper, Adversarial Unlearning of Backdoors via Implicit Hypergradient [openreview] [video].
We propose a novel minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data:
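In symbols, with $\theta$ the model parameters, $\delta$ a universal trigger perturbation bounded by $C_\delta$, and $\{(x_i, y_i)\}_{i=1}^{n}$ the clean samples, the objective reads (reconstructed here from the paper's formulation):

$$
\min_{\theta}\; \max_{\|\delta\| \le C_{\delta}}\; \frac{1}{n} \sum_{i=1}^{n} L\big(f_{\theta}(x_i + \delta),\, y_i\big)
$$

The inner maximization searches for the worst-case trigger on the clean set, while the outer minimization updates the model to unlearn it.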
To solve the minimax problem, we propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm, which utilizes the implicit hypergradient to account for the interdependence between the inner and outer optimization. I-BAU requires less computation to take effect; in particular, it is more than 13× faster than the most efficient baseline in the single-target attack setting. It remains effective even in the extreme case where the defender has access to only 100 clean samples, a setting where all the baselines fail to produce acceptable results.
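As a rough illustration of one I-BAU round, the PyTorch sketch below runs a few steps of gradient ascent on a universal trigger `delta` (the inner problem), then updates the model with a hypergradient approximated by a K-step fixed-point (Neumann-style) iteration. This is a simplified sketch, not the code in `defense.py`: the function and hyperparameter names (`ibau_round`, `alpha`, `outer_lr`) are illustrative, and the official implementation computes the hypergradient through the `hypergrad` package.

```python
import torch
import torch.nn.functional as F

def ibau_round(model, images, labels, delta, alpha=0.1, K=5, outer_lr=1e-3):
    """One illustrative round of minimax unlearning on a batch of clean data."""
    params = [p for p in model.parameters() if p.requires_grad]

    def H(d):
        # Shared inner/outer objective: loss on clean data shifted by the trigger.
        return F.cross_entropy(model(torch.clamp(images + d, 0.0, 1.0)), labels)

    # Inner maximization: gradient ascent on the universal trigger delta.
    delta = delta.detach().requires_grad_(True)
    for _ in range(K):
        g = torch.autograd.grad(H(delta), delta)[0]
        delta = (delta + alpha * g).detach().requires_grad_(True)

    # Outer step: implicit hypergradient via a K-step fixed-point iteration.
    loss = H(delta)
    grad_d = torch.autograd.grad(loss, delta, create_graph=True)[0]
    phi = delta + alpha * grad_d  # one differentiable ascent step Phi(delta, theta)
    g_out = torch.autograd.grad(loss, params, retain_graph=True)

    # w approximates the solution of w = (dPhi/ddelta)^T w + grad_delta(loss).
    w = grad_d.detach()
    for _ in range(K):
        w = torch.autograd.grad(phi, delta, grad_outputs=w,
                                retain_graph=True)[0] + grad_d.detach()

    # Hypergradient = grad_theta(loss) + (dPhi/dtheta)^T w; descend on it.
    correction = torch.autograd.grad(phi, params, grad_outputs=w, allow_unused=True)
    with torch.no_grad():
        for p, go, c in zip(params, g_out, correction):
            p -= outer_lr * (go + (c if c is not None else 0.0))
    return delta.detach()
```

The correction term is what distinguishes the implicit hypergradient from a plain Danskin-style gradient: it accounts for how the inner maximizer `delta` responds to changes in the model parameters, which matters because the inner problem is only solved approximately.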
This code has been tested with Python 3.6, PyTorch 1.8.1, and CUDA 10.1.
- Install required packages.
- Get poisoned models prepared in the directory `./checkpoint/`.
- We provide two examples of poisoned models trained on the GTSRB and CIFAR10 datasets; check `clean_solution_batch_op..._cifar.ipynb` and `clean_solution_batch_op..._gtsrb.ipynb` for more details.
- For more flexible usage, run `python defense.py`. An example is as follows:

  ```
  python defense.py --dataset cifar10 --poi_path './checkpoint/badnets_8_02_ckpt.pth' --optim Adam --lr 0.001 --n_rounds 3 --K 5
  ```

  Clean data used for backdoor unlearning can be specified with the argument `--unl_set`; if it is not specified, a subset of data from the test set will be used for unlearning (see the sketch after this list).
- For more information regarding training options, please check the help message:

  ```
  python defense.py --help
  ```
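As referenced above, a small clean unlearning set can be carved out of a test split before being handed to the defense. The snippet below is a minimal sketch using `torchvision` and CIFAR10; the 100-sample size mirrors the low-data setting evaluated in the paper, while the paths and loader settings are illustrative, and how `defense.py` expects the set to be packaged is repository-specific.

```python
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Load the CIFAR10 test split.
testset = datasets.CIFAR10(root='./data', train=False, download=True,
                           transform=transforms.ToTensor())

# Reserve the first 100 samples as the clean unlearning set; keep the rest
# for measuring clean accuracy and attack success rate after unlearning.
unl_set = Subset(testset, range(100))
eval_set = Subset(testset, range(100, len(testset)))

unl_loader = DataLoader(unl_set, batch_size=100, shuffle=False)
eval_loader = DataLoader(eval_set, batch_size=256, shuffle=False)
```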
If you find our work useful, please cite:
```
@inproceedings{zeng2021adversarial,
  title={Adversarial Unlearning of Backdoors via Implicit Hypergradient},
  author={Zeng, Yi and Chen, Si and Park, Won and Mao, Zhuoqing and Jin, Ming and Jia, Ruoxi},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
```