
Reliable Adversarial Distillation with Unreliable Teachers

Code for the ICLR 2022 paper "Reliable Adversarial Distillation with Unreliable Teachers"

by Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang.

Full code and instructions will be released soon.

Introduction

In this work, we find that the soft labels provided by the teacher model gradually become less reliable as the adversarial training of the student model progresses. Based on this observation, we propose to only partially trust the soft labels provided by the teacher model during adversarial distillation.
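To convey the intuition only, the snippet below sketches a partial-trust distillation loss: the soft-label term is weighted per example by how confident the teacher still is on the adversarial input, and the hard-label loss takes over where the teacher is unreliable. This is a hedged illustration, not the exact IAD-I/IAD-II objectives (which build on ARD and AKD2); the function name, `temperature`, and `gamma` are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def partial_trust_kd_loss(student_logits_adv, teacher_logits_adv, labels,
                              temperature=5.0, gamma=1.0):
        # Sketch only: per-example trust weight = probability the teacher assigns
        # to the true class of the adversarial example, sharpened by gamma.
        teacher_prob = F.softmax(teacher_logits_adv.detach(), dim=1)
        trust = teacher_prob.gather(1, labels.unsqueeze(1)).squeeze(1).pow(gamma)

        # Soft-label (knowledge distillation) term, computed per example.
        kd = F.kl_div(F.log_softmax(student_logits_adv / temperature, dim=1),
                      F.softmax(teacher_logits_adv.detach() / temperature, dim=1),
                      reduction='none').sum(dim=1) * (temperature ** 2)

        # Hard-label cross-entropy term, computed per example.
        ce = F.cross_entropy(student_logits_adv, labels, reduction='none')

        # Trust the teacher where it is still confident, the hard labels otherwise.
        return (trust * kd + (1.0 - trust) * ce).mean()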

Environment

  • Python (3.7.10)
  • PyTorch (1.7.1)
  • torchvision (0.8.2)
  • CUDA
  • NumPy
  • AdverTorch (advertorch)

Content

  • ./models: model architectures used for pre-training and distillation.
  • ./pre_train: code for pre-training with AT (adversarial training) and ST (standard training).
  • IAD-I.py: Introspective Adversarial Distillation based on ARD.
  • IAD-II.py: Introspective Adversarial Distillation based on AKD2.

Usage

Pre-train

  • AT
      cd ./pre_train
      CUDA_VISIBLE_DEVICES='0' python AT.py --out-dir INSERT-YOUR-OUTPUT-PATH
  • ST
      cd ./pre_train
      CUDA_VISIBLE_DEVICES='0' python ST.py --out-dir INSERT-YOUR-OUTPUT-PATH
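For orientation, here is a minimal sketch of one PGD-based adversarial training step, roughly what an AT pre-training script does; the epsilon, step size, and iteration count are common CIFAR-10 defaults, not values read from AT.py.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Random start inside the L-inf eps-ball, then iterated gradient-sign steps.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def at_step(model, optimizer, x, y):
        x_adv = pgd_attack(model, x, y)            # craft adversarial examples on the fly
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)    # Madry-style AT: train on adversarial examples
        loss.backward()
        optimizer.step()
        return loss.item()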

Distillation

  • IAD-I
      CUDA_VISIBLE_DEVICES='0' python IAD-I.py --teacher_path INSERT-YOUR-TEACHER-PATH --out-dir INSERT-YOUR-OUTPUT-PATH
  • IAD-II
      CUDA_VISIBLE_DEVICES='0' python IAD-II.py --teacher_path INSERT-YOUR-TEACHER-PATH --out-dir INSERT-YOUR-OUTPUT-PATH
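If it helps, the following is a hypothetical way a teacher checkpoint can be loaded and frozen before distillation; torchvision's resnet18 is used only as a stand-in, since the actual architectures and checkpoint format live under ./models and may differ.

    import torch
    from torchvision.models import resnet18  # stand-in for the teacher architecture under ./models

    teacher = resnet18(num_classes=10)
    state = torch.load('INSERT-YOUR-TEACHER-PATH', map_location='cpu')
    teacher.load_state_dict(state)
    teacher.eval()                            # the teacher stays frozen during distillation
    for p in teacher.parameters():
        p.requires_grad_(False)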

Evaluation

  • basic eval
      CUDA_VISIBLE_DEVICES='0' python basic_eval.py --model_path INSERT-YOUR-MODEL-PATH
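As a rough guide, robust accuracy can be checked with a PGD attack; the sketch below uses AdverTorch's LinfPGDAttack with common CIFAR-10 settings, which may differ from the attack configuration that basic_eval.py actually runs.

    import torch
    from advertorch.attacks import LinfPGDAttack

    def robust_accuracy(model, loader, device='cuda'):
        model.eval()
        adversary = LinfPGDAttack(model, eps=8/255, nb_iter=20, eps_iter=2/255,
                                  rand_init=True, clip_min=0.0, clip_max=1.0)
        correct, total = 0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = adversary.perturb(x, y)        # generate adversarial test examples
            with torch.no_grad():
                pred = model(x_adv).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
        return correct / total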

Citation

@inproceedings{zhu2022reliable,
title={Reliable Adversarial Distillation with Unreliable Teachers},
author={Jianing Zhu and Jiangchao Yao and Bo Han and Jingfeng Zhang and Tongliang Liu and Gang Niu and Jingren Zhou and Jianliang Xu and Hongxia Yang},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=u6TRGdzhfip}
}

Reference Code

[1] AT: https://github.com/locuslab/robust_overfitting

[2] TRADES: https://github.com/yaodongyu/TRADES/

[3] ARD: https://github.com/goldblum/AdversariallyRobustDistillation

[4] AKD2: https://github.com/VITA-Group/Alleviate-Robust-Overfitting

[5] GAIRAT: https://github.com/zjfheart/Geometry-aware-Instance-reweighted-Adversarial-Training

Contact

Please contact csjnzhu@comp.hkbu.edu.hk if you have any questions about the code.