
Adversarial Examples Guided Imbalanced Learning

This is the official PyTorch implementation of Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning (ICIP 2022)

How to use

We provide several training examples in this repo:

  • To train the ERM baseline (CE loss) on long-tailed imbalance with an imbalance ratio of 100:
python cifar_train.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type CE --train_rule None

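For reference, --imb_type exp --imb_factor 0.01 requests an exponentially decaying class distribution whose tail class keeps 1% of the head class's samples, i.e. an imbalance ratio of 100. Below is a minimal sketch of that profile, assuming the standard LDAM-DRW-style construction for CIFAR-10; the helper name is illustrative, not a function from this repo.

# Illustrative helper (not the repo's exact code): per-class sample counts for an
# exponential ("exp") long-tailed profile, as used in LDAM-DRW-style CIFAR loaders.
def exp_img_num_per_cls(cls_num=10, img_max=5000, imb_factor=0.01):
    return [int(img_max * imb_factor ** (i / (cls_num - 1))) for i in range(cls_num)]

print(exp_img_num_per_cls())
# [5000, 2997, 1796, 1077, 645, 387, 232, 139, 83, 50]  -> head/tail ratio = 100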

  • To train with the LDAM loss and the DRW training rule on long-tailed imbalance with an imbalance ratio of 100:
python cifar_train.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type LDAM --train_rule DRW

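Here --loss_type LDAM --train_rule DRW combines a label-distribution-aware margin loss with deferred re-weighting. The sketch below shows an LDAM-style loss, assuming the defaults of the common reference implementation (max_m=0.5, s=30); the class and argument names are illustrative and may differ from cifar_train.py.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LDAMLossSketch(nn.Module):
    # Margin-based cross-entropy: the true-class logit is reduced by a margin
    # proportional to n_c^(-1/4), so tail classes receive larger margins.
    def __init__(self, cls_num_list, max_m=0.5, s=30, weight=None):
        super().__init__()
        m = 1.0 / torch.tensor(cls_num_list, dtype=torch.float).pow(0.25)
        self.m_list = m * (max_m / m.max())  # scale so the tail-class margin equals max_m
        self.s = s                           # logit scale
        self.weight = weight                 # optional per-class weights, used by DRW

    def forward(self, logits, target):
        margins = self.m_list.to(logits.device)[target]        # margin of each true label
        onehot = F.one_hot(target, logits.size(1)).bool()
        adjusted = torch.where(onehot, logits - margins.unsqueeze(1), logits)
        return F.cross_entropy(self.s * adjusted, target, weight=self.weight)

Under DRW, weight stays None for most of training and is switched to per-class weights (e.g. based on the effective number of samples) only in the final epochs; this is the "deferred" part of the schedule in the LDAM-DRW reference implementation.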

  • To use our method to adjust the biased decision boundary, fine-tune the pretrained ERM baseline checkpoint:
python train_adv.py --gpu 0 --imb_type exp --imb_factor 0.01 --loss_type CE --train_rule None \
        --resume checkpoint/cifar10_resnet32_CE_None_exp_0.01_0/ckpt.best.pth.tar --lr 0.001


Note that our method simply fine-tunes the biased model for a few epochs, which makes it both efficient and effective.
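
For intuition, train_adv.py resumes the biased checkpoint, crafts adversarial examples with it, and fine-tunes briefly with the small learning rate shown above (--lr 0.001). The sketch below is not the paper's guided generation procedure: it uses a plain untargeted PGD attack as a stand-in, and pgd_attack / finetune_step are hypothetical names, so refer to train_adv.py for the actual guidance strategy and hyperparameters.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Plain untargeted PGD (assumes inputs in [0, 1]); a stand-in for the paper's
    # guided adversarial example generation implemented in train_adv.py.
    model.eval()  # freeze BN/dropout behaviour while crafting the attack
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def finetune_step(model, optimizer, x, y):
    # One fine-tuning step on the resumed (biased) checkpoint: mix the clean batch
    # with its adversarial counterpart as a rough proxy for boundary adjustment.
    x_adv = pgd_attack(model, x, y)
    model.train()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()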

Citation

@article{zhang2022adversarial,
  title={Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning},
  author={Zhang, Jie and Zhang, Lei and Li, Gang and Wu, Chao},
  journal={arXiv preprint arXiv:2201.12356},
  year={2022}
}