Source-attack

Combine source forensics and adversarial attacks. Give a reasonable attack and a defensive method for this case.

This code is based on the paper 'Adversarial analysis for source camera identifications'.

It supports:

  • Multi-task classification model
  • Couple-coding
  • Noise retraining to detect adversarial examples generated by the proposed adversarial attacks
  • A reasonable adversarial attack for source camera identification ...

Requirements

  • Python 2.7+
  • PyTorch 1.3+ (along with torchvision)
  • CUDA 10.0

Prepare data

All experiments are based on the publicly available Dresden dataset.
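
A rough data-preparation sketch is shown below. The directory layout (one folder per camera model), the patch size, and the paths are assumptions for illustration, not the repository's actual loader.

# Minimal sketch only: assumes images are sorted as ./data/dresden/<camera_model>/<image>.jpg
# (hypothetical layout) and that 256x256 center crops are used.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.CenterCrop(256),   # crop size is an assumption
    transforms.ToTensor(),
])
dresden = datasets.ImageFolder('./data/dresden', transform=transform)
loader = torch.utils.data.DataLoader(dresden, batch_size=128, shuffle=True, num_workers=4)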

Start training

  1. Train the classification network
  • mkdir Universal
$ python train_couple_Net.py -b 128 --save_dir './Universal' --epochs 21

The trained models will be saved in --save_dir as checkpoint_{epoch}.tar.
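
The checkpoints are ordinary PyTorch .tar files. A minimal loading sketch, assuming the checkpoint is a dict with 'epoch' and 'state_dict' keys (the real keys are whatever train_couple_Net.py stores, so check before use):

import torch

# Assumed checkpoint layout: a dict holding 'epoch' and 'state_dict'.
ckpt = torch.load('./Universal/checkpoint_19.tar', map_location='cpu')
state_dict = ckpt['state_dict'] if isinstance(ckpt, dict) and 'state_dict' in ckpt else ckpt
# model.load_state_dict(state_dict)  # 'model' must be rebuilt with the training architecture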

  2. Noise retraining
  • mkdir noise_retrain
$ python noise_retrain.py --resume './Universal/checkpoint_19.tar' --save_dir './noise_retrain'

The model will be saved in --save_dir with a filename built from the epoch index and the --dataset value (e.g. checkpoint_{epoch}.tar). One epoch is enough for detecting adversarial examples.
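
A hypothetical sketch of what one noise-retraining epoch could look like: fine-tune the trained classifier on inputs perturbed with random noise. The actual noise model, loss, and detection rule are defined in noise_retrain.py and may differ.

import torch
import torch.nn.functional as F

# Illustrative only: fine-tune on Gaussian-noised inputs for one epoch.
def noise_retrain_epoch(model, loader, optimizer, sigma=0.05, device='cuda'):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(noisy), labels)
        loss.backward()
        optimizer.step()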

  3. Evaluate the noise retraining method's performance
$ python attack.py --model './Universal/checkpoint_20.tar'
$ python attack.py --model './noise_retrain/checkpoint_0.tar'
  4. Reasonable adversarial attack
  • mkdir Raa
$ python train_auto_learn.py --save_dir './Raa' --epochs 15
$ python test_auto_learn.py --resume './Raa/checkpoint_15.tar' --class_net_path './Universal/checkpoint_19.tar'
$ python test_auto_learn.py --resume './Raa/checkpoint_15.tar' --class_net_path './noise_retrain/checkpoint_0.tar'
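
For orientation, the sketch below is a standard FGSM attack against a trained classifier. It is not the proposed reasonable attack (that one is learned by train_auto_learn.py); it only illustrates how adversarial camera-identification examples are commonly crafted and what the noise-retrained model is meant to detect.

import torch
import torch.nn.functional as F

# Baseline FGSM attack, for reference only (not the repository's method).
def fgsm(model, images, labels, eps=2.0 / 255):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()   # one signed gradient step
    return adv.clamp(0.0, 1.0).detach()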