
ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System

This code accompanies the paper "ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System".

Installing

On Linux, the dependencies can be installed with:

pip install https://github.com/mind/wheels/releases/download/tf1.8-cpu/tensorflow-1.8.0-cp36-cp36m-linux_x86_64.whl
pip install keras
pip install cleverhans
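
A quick sanity check that the pinned versions imported correctly (this snippet is illustrative and not part of the repository):

# Verify the environment the code expects (TensorFlow 1.8, Keras, CleverHans).
import tensorflow as tf
import keras
import cleverhans

print("TensorFlow:", tf.__version__)   # expected: 1.8.0
print("Keras:", keras.__version__)
print("CleverHans:", cleverhans.__version__)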

Prerequisites

Download the following items:

wget https://physionet.org/challenge/2017/training2017.zip
unzip training2017.zip
wget https://physionet.org/challenge/2017/REFERENCE-v3.csv
wget https://github.com/fernandoandreotti/cinc-challenge2017/blob/master/deeplearn-approach/ResNet_30s_34lay_16conv.hdf5

Note: the last URL points to a GitHub page, so wget will fetch HTML; make sure you download the raw .hdf5 file itself.
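
Once the files are in place, they can be loaded as sketched below. This assumes the standard PhysioNet 2017 layout (each .mat record holds a 'val' array); the file paths are examples and this is not a script from the repository:

# Minimal loading sketch (assumed file layout; illustrative only).
import csv
import scipy.io
from keras.models import load_model

model = load_model('ResNet_30s_34lay_16conv.hdf5')            # pretrained target model

with open('REFERENCE-v3.csv') as f:
    labels = dict(csv.reader(f))                              # record name -> label (N/A/O/~)

signal = scipy.io.loadmat('training2017/A00001.mat')['val']   # raw ECG signal
print(signal.shape, labels['A00001'])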

File Description

PrepareAttackDataset.py: runs the target model on the dataset and records the samples whose predictions match the label file, i.e., the correctly classified ones (a sketch of this step follows the output-file list below).

prediction_correct.csv: output file of PrepareAttackDataset.py; contains all correct predictions.

data_select_A.csv, data_select_N.csv, data_select_O.csv, data_select_i.csv: output files of PrepareAttackDataset.py; contain the correct predictions of classes A, N, O, and ~, respectively.
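
The filtering itself amounts to keeping only correctly classified records. A minimal sketch of the idea (the function name and the model's expected input shape are assumptions, not the script's actual API):

# Sketch of the correct-prediction filter (illustrative; see PrepareAttackDataset.py).
import numpy as np

CLASSES = ['A', 'N', 'O', '~']   # class order as used throughout this README

def is_correctly_classified(model, signal, true_label):
    # Add batch/channel dimensions as the Keras model expects, then take argmax.
    probs = model.predict(signal.reshape(1, -1, 1))
    return CLASSES[int(np.argmax(probs))] == true_label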

For attacks against the cloud deployment model:

  • myattacks_l2.py & myattacks_tf_l2.py: Attack function with similarity metric d_l2

  • myattacks_diff.py & myattacks_tf_diff.py: Attack function with similarity metric d_smooth

  • myattacks_diffl2.py & myattacks_tf_diffl2.py: Attack function with similarity metric d_smooth,l2 (see the sketch after this list)

  • cloud_eval_l2.py: Generates attack perturbations by calling myattacks_l2.py and saves them in "./cloud_model/l2_eval/"

  • cloud_eval_diff.py: Generates attack perturbations by calling myattacks_diff.py and saves them in "./cloud_model/smooth_eval/"

  • cloud_eval_diffl2.py: Generates attack perturbations by calling myattacks_diffl2.py and saves them in "./cloud_model/l2smooth_0_01_eval/"
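
The three similarity metrics differ in what they penalize: d_l2 is the plain L2 distance, d_smooth additionally discourages jagged, non-physiological perturbations, and d_smooth,l2 combines both. An illustrative NumPy sketch follows; the exact definitions live in the paper and the myattacks_tf_*.py files, and the 0.01 weight below is only inferred from the output directory name:

# Illustrative similarity metrics (see the paper for the exact definitions).
import numpy as np

def d_l2(delta):
    # Plain L2 norm of the perturbation.
    return np.sqrt(np.sum(delta ** 2))

def d_smooth(delta):
    # Smoothness surrogate: penalize squared differences between adjacent points.
    return np.sum(np.diff(delta) ** 2)

def d_smooth_l2(delta, weight=0.01):
    # Weighted combination of the two metrics.
    return d_smooth(delta) + weight * d_l2(delta)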

Example run:

python attack_file index_file start_idx end_idx
  • attack_file: can be "cloud_eval_l2.py", "cloud_eval_diff.py", "cloud_eval_diffl2.py"
  • index_file: can be "data_select_A.csv", "data_select_N.csv", "data_select_O.csv", "data_select_i.csv"
  • start_idx: integer, at least 1
  • end_idx: integer, exclusive (samples with indices start_idx to end_idx - 1 are attacked)
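
For instance, to attack the first ten correctly classified class-A samples with the d_l2 metric (the indices are illustrative):

python cloud_eval_l2.py data_select_A.csv 1 11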

./cloud_model/metric_compare.py: given idx, TRUTH, and TARGET, plots a figure showing the original sample and the three adversarial versions (one per similarity metric).

For attacks against the local deployment model:

  • LDM_EOT.py & LDM_EOT_tf.py: Attack function with EOT (Expectation Over Transformation; see the sketch after this list)
  • LDM_Attack.py: Generates an attack perturbation for the local deployment model and saves it in "./output/$GroundTruth/"
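
EOT makes the perturbation effective regardless of where the classifier's window lands on it, by averaging gradients over random placements. A minimal sketch of the idea (grad_fn is an assumed helper returning the attack-loss gradient; this is not the repository's implementation):

# EOT sketch: average attack-loss gradients over random perturbation offsets.
import numpy as np

def eot_gradient(grad_fn, signal, delta, window_size, n_draws=30):
    grads = []
    for _ in range(n_draws):
        # Place the perturbation window at a random offset in the signal.
        offset = np.random.randint(0, len(signal) - window_size + 1)
        x = signal.copy()
        x[offset:offset + window_size] += delta
        # Keep only the gradient over the perturbed window.
        grads.append(grad_fn(x)[offset:offset + window_size])
    return np.mean(grads, axis=0)   # expectation over the random transformations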

Example run:

python LDM_Attack.py sample_idx target window_size
  • sample_idx - Index of the sample to attack
  • target - Target class: 0, 1, 2, 3 represent A, N, O, ~, respectively
  • window_size - Perturbation window size (integer)
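
For example, to craft a perturbation from sample 1 targeting class A (index 0) with a 1000-point window (all values illustrative):

python LDM_Attack.py 1 0 1000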

  • LDM_UniversalEval.py: Tests a perturbation generated by LDM_Attack.py. The program loads the perturbation from "./output/$GroundTruth/" and tests it by adding it to all the samples in data_select_?.csv with the targeted class.

Example run:

python LDM_UniversalEval.py perturb_idx target window_size
  • perturb_idx - Index of the sample that generated the perturbation
  • target - Target class: 0, 1, 2, 3 represent A, N, O, ~, respectively
  • window_size - Integer, perturbation window size

To demonstrate the universality of the attack, the program tests all the samples in data_select_$Target.csv that belong to the target class.
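
For example, to test the perturbation generated above against all class-A-targeted samples (values illustrative, matching the LDM_Attack.py run):

python LDM_UniversalEval.py 1 0 1000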

License

This project is licensed under the MIT License - see the LICENSE file for details.