
iteratively-adversarial-training

This repository implements my research idea: iterative adversarial training.

Introduction

It is widely known that adversarial training is effective at defending against adversarial attacks. It is therefore natural to ask: are the adversarial samples generated by adversarially trained models more powerful as attacks? Moreover, what happens if we train on such samples? Do the samples become more powerful, and do the models become more robust, as the training iterates?
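To make the idea concrete, here is a minimal sketch of one possible reading of the loop: generation 0 is ordinary adversarial training, and each later generation trains a fresh model on samples crafted against the previous generation's model. It is illustrative only; `SmallNet`, `pgd_attack`, and all hyperparameters are assumptions and need not match what `main_iter_adv.py` actually does.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Tiny stand-in classifier; the real architecture may differ."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 28 * 28, num_classes)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))


def pgd_attack(model, x, y, eps=0.3, alpha=0.03, steps=10):
    """L-infinity PGD (assumed attack; the repo may use a different one)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()


def adv_train(model, loader, attack_model, epochs=1, lr=0.01):
    """Train `model` on adversarial examples crafted against `attack_model`."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(attack_model, x, y)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()


def iterative_adversarial_training(loader, generations=3):
    # Generation 0: ordinary adversarial training (attack the model being trained).
    model = SmallNet()
    adv_train(model, loader, attack_model=model)
    for _ in range(1, generations):
        # Later generations: train a fresh model on samples crafted
        # against the previous, already adversarially trained, model.
        fresh = SmallNet()
        adv_train(fresh, loader, attack_model=model)
        model = fresh
    return model


if __name__ == "__main__":
    # Toy random data standing in for a real dataset such as MNIST.
    loader = [(torch.rand(32, 1, 28, 28), torch.randint(0, 10, (32,)))]
    iterative_adversarial_training(loader, generations=2)
```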

Use

Start iterative adversarial training:

python3 main_iter_adv.py

Start iterative "poisonous" training:

python3 main_iter_poison.py

Conclusion

The iteratively generated samples have stronger black-box attack power than the directly generated samples.
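Black-box attack power here presumably means transfer: examples crafted against one model are evaluated on another model they were not tuned against. A minimal sketch of how such transfer could be measured, assuming PyTorch classifiers (the function and model names are hypothetical, not from this repo):

```python
import torch


@torch.no_grad()
def transfer_success_rate(target_model, x_adv, y):
    """Error rate of `target_model` on adversarial examples crafted against a
    *different* source model, i.e. black-box transfer attack success."""
    target_model.eval()
    return (target_model(x_adv).argmax(dim=1) != y).float().mean().item()


# Usage sketch (source_model / target_model are hypothetical):
#   x_adv = pgd_attack(source_model, x, y)                # white-box on the source
#   rate = transfer_success_rate(target_model, x_adv, y)  # black-box on the target
```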

The other results do not show advantages over the direct approach.