
Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks (TIFS 2024)

[Project Page] | [arXiv]  

Introduction


Figure: Overview of our FP-Better.

In this work, we propose to exploit the interior building blocks of the model to improve efficiency. Specifically, we dynamically sample lightweight subnetworks as a surrogate model during training, so that both the forward and backward passes used for adversarial example generation are accelerated. In addition, we provide a theoretical analysis showing that model robustness can be improved by single-step adversarial training with sampled subnetworks. Furthermore, we propose a novel sampling strategy in which the sampling varies from layer to layer and from iteration to iteration. Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness. Evaluations on a series of popular datasets demonstrate the effectiveness of the proposed FP-Better.
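To make the idea concrete, below is a minimal PyTorch sketch of subnetwork sampling: residual blocks are kept or skipped at random with layer-wise probabilities that are resampled every iteration, yielding a lightweight surrogate for perturbation generation. The SampledResNet class, the sample_mask helper, and the keep probabilities are hypothetical illustrations, not the repository's actual implementation.

    import torch
    import torch.nn as nn

    class SampledResNet(nn.Module):
        # Toy residual network whose blocks can be skipped at random,
        # yielding a lightweight subnetwork for perturbation generation.
        def __init__(self, num_blocks=8, width=64, num_classes=10):
            super().__init__()
            self.stem = nn.Conv2d(3, width, 3, padding=1)
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Conv2d(width, width, 3, padding=1),
                              nn.BatchNorm2d(width),
                              nn.ReLU())
                for _ in range(num_blocks))
            self.head = nn.Linear(width, num_classes)

        def forward(self, x, keep_mask=None):
            # keep_mask[i] == False skips block i (the identity shortcut
            # remains), saving its forward and backward computation.
            x = self.stem(x)
            for i, block in enumerate(self.blocks):
                if keep_mask is None or keep_mask[i]:
                    x = x + block(x)
            return self.head(x.mean(dim=(2, 3)))

    def sample_mask(num_blocks, keep_probs=None):
        # Layer-wise keep probabilities, resampled at every iteration, so
        # the subnetwork varies from layer to layer and step to step.
        if keep_probs is None:
            keep_probs = [0.5] * num_blocks
        return [torch.rand(1).item() < p for p in keep_probs]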

Requirements

  • Platform: Linux
  • Hardware: NVIDIA GeForce RTX 3090 GPU
  • Software: PyTorch, etc. (a typical setup command is shown below)
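Since the repository does not pin exact versions, the following is an assumed, typical setup:

pip install torch torchvision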

Train

python3.6 FGSM_DSD_cifar10.py  --out_dir ./output/ --data-dir cifar-data
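For the gist of what a training step does before opening FGSM_DSD_cifar10.py, here is a conceptual sketch of single-step (FGSM-style) adversarial training in which the perturbation is generated through a randomly sampled subnetwork while the full network is updated. It reuses the hypothetical SampledResNet and sample_mask from the sketch above; the epsilon, step size, and optimizer handling are illustrative defaults, not the script's actual settings.

    import torch
    import torch.nn.functional as F

    def fgsm_train_step(model, x, y, optimizer, eps=8/255, alpha=10/255):
        # Generate the perturbation through a freshly sampled lightweight
        # subnetwork: this is where the forward/backward speed-up comes from.
        mask = sample_mask(len(model.blocks))
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = F.cross_entropy(model(x + delta, keep_mask=mask), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        x_adv = (x + delta).clamp(0.0, 1.0)

        # Update the full network on the adversarial examples.
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(x_adv), y)
        adv_loss.backward()
        optimizer.step()
        return adv_loss.item()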

Test

python3.6 test_cifar10.py --model_path model.pth --out_dir ./output/ --data-dir cifar-data
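As a rough picture of what a robustness evaluation involves, below is a minimal L-infinity PGD attack sketch; it is a standard formulation with illustrative defaults, not necessarily the attack configuration used by test_cifar10.py. Robust accuracy is the accuracy of model(x_adv) against the true labels.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Standard L-infinity PGD with random start; call with the model
        # in eval mode.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
            delta = (x + delta).clamp(0.0, 1.0) - x  # keep images in [0, 1]
        return x + delta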

Trained Models

The trained models can be downloaded from Baidu Cloud (extraction code: 1234) or Google Drive.