Implementation of the paper "Defending Against Model Stealing Attacks with Adaptive Misinformation".
- conda env create -f environment.yml # Creates Anaconda env with requirements
- git clone https://github.com/tribhuvanesh/knockoffnets.git # Download KnockoffNets repository
- export PYTHONPATH="$PYTHONPATH:/knockoffnets:/adaptivemisinformation" # Add KnockoffNets and AdaptiveMisinformation to PYTHONPATH; replace with the paths containing the knockoffnets and adaptivemisinformation dirs
python admis/defender/train.py MNIST lenet -o models/defender/mnist -e 20 --lr 0.1 --lr-step 10 --log-interval 200 -b 128 --defense=SM --oe_lamb 1 -doe KMNIST
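The `--defense=SM --oe_lamb 1 -doe KMNIST` flags suggest the defender is trained with an outlier-exposure-style objective: standard cross-entropy on in-distribution (MNIST) batches plus a term pushing the model's outputs on out-of-distribution (KMNIST) batches toward the uniform distribution. The sketch below is illustrative only; `oe_loss` is a hypothetical helper, and the exact formulation used in `train.py` may differ.

```python
import math

def oe_loss(in_probs, in_label, ood_probs, oe_lamb=1.0):
    """Hypothetical sketch of an outlier-exposure objective.

    in_probs:  predicted class probabilities for an in-distribution sample
    in_label:  its ground-truth class index
    ood_probs: predicted probabilities for an out-of-distribution sample
    oe_lamb:   weight of the OOD term (cf. --oe_lamb)
    """
    # Standard cross-entropy on the in-distribution sample.
    ce = -math.log(in_probs[in_label])
    # Cross-entropy between the uniform distribution and the OOD output;
    # minimized when the model is maximally uncertain on OOD inputs.
    k = len(ood_probs)
    uniform_penalty = sum(-(1.0 / k) * math.log(p) for p in ood_probs)
    return ce + oe_lamb * uniform_penalty
```

Training the defender to be uncertain on OOD data is what later lets it detect (and misinform on) the attacker's out-of-distribution queries.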
python admis/benign_user/test.py MNIST models/defender/mnist --defense SM --defense_levels 0.99
python admis/adv_user/transfer.py models/defender/mnist --out_dir models/adv_user/mnist --budget 50000 --queryset EMNISTLetters --defense SM --defense_levels 0.99
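Conceptually, `transfer.py` plays the attacker's first step: query the defended model on a surrogate query set (here EMNISTLetters) up to a fixed budget and record the returned posteriors as soft labels for clone training. A minimal sketch of that loop, with hypothetical names (`query_model`, `collect_transfer_set`) not taken from the repo:

```python
def collect_transfer_set(query_model, surrogate_inputs, budget):
    """Query the (defended) victim on surrogate inputs and store its
    output posteriors as soft labels. Sketch only; the actual script's
    batching and serialization differ."""
    transfer_set = []
    for x in surrogate_inputs[:budget]:
        # When the defense is active, these posteriors may be
        # deliberately misleading for out-of-distribution queries.
        posteriors = query_model(x)
        transfer_set.append((x, posteriors))
    return transfer_set
```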
python ./admis/adv_user/train_knockoff.py models/adv_user/mnist lenet MNIST --budgets 50000 --batch-size 128 --log-interval 200 --epochs 20 --lr 0.1 --lr-step 10 --defense SM --defense_level 0.99
python admis/adv_user/train_jbda.py ./models/defender/mnist/ ./models/adv_user/mnist/ lenet MNIST --defense=SM --aug_rounds=6 --epochs=10 --substitute_init_size=150 --defense_level=0.99 --lr 0.01
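`train_jbda.py` runs the Jacobian-based dataset augmentation attack (`--aug_rounds=6`): each round perturbs the substitute set along the sign of the substitute model's input gradient and labels the new points by querying the defended model. A hedged, pure-Python sketch of one round, with hypothetical callables (`substitute_grad`, `victim_label`) standing in for autograd and the victim API:

```python
def jbda_round(substitute_grad, victim_label, samples, lam=0.1):
    """One Jacobian-based augmentation round (sketch, not the repo's code).

    substitute_grad: x -> gradient of the substitute's score w.r.t. x
    victim_label:    x -> label obtained by querying the defended model
    lam:             perturbation step size
    """
    augmented = []
    for x in samples:
        g = substitute_grad(x)
        # Step each feature in the direction of the gradient's sign.
        x_new = [xi + lam * (1.0 if gi >= 0 else -1.0) for xi, gi in zip(x, g)]
        augmented.append((x_new, victim_label(x_new)))
    return augmented
```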
Note:
- '--defense_levels' refers to the values of tau in the context of Selective Misinformation.
- Varying '--defense_levels' can be used to obtain the trade-off curve between defender accuracy and clone accuracy.
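As a rough illustration of tau's role (a minimal sketch, assuming tau thresholds the defender's maximum softmax probability, which the paper uses as its in/out-of-distribution score; the repo's actual defense blends outputs rather than hard-switching):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def selective_misinformation(true_logits, misinfo_logits, tau):
    """Serve true posteriors only for confident (in-distribution-looking)
    inputs; otherwise serve the misinformation model's output.
    `misinfo_logits` is a hypothetical stand-in for the misinformation
    function's output on the same input."""
    p = softmax(true_logits)
    if max(p) >= tau:        # confident -> likely in-distribution
        return p
    return softmax(misinfo_logits)  # suspected attack query -> mislead
```

Raising tau misinforms more queries, lowering clone accuracy but also hurting benign (defender) accuracy; sweeping tau traces the trade-off curve mentioned above.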
Parts of this repository have been adapted from https://github.com/tribhuvanesh/knockoffnets