mmazeika/tdc-starter-kit

Evasive trojan challenge uses incorrect `trojan_type` of `evasive_trojan` instead of `trojan_evasion`

carlini opened this issue · 2 comments

In train_batch_of_models.py we see the following line:

training_kwargs['num_epochs'] = 20 if args.trojan_type == 'evasive_trojan' else 10

but further down, the other condition on args.trojan_type uses a different string:

elif args.trojan_type == 'trojan_evasion':  # evasive Trojans baseline
    training_function = utils.train_trojan_evasion
    training_kwargs['attack_specification'] = attack_specifications[model_idx]
    training_kwargs['trojan_batch_size'] = args.trojan_batch_size
    # assumes clean models used for initializing the evasive Trojan baseline are in ./models/clean_init
    clean_model_paths = [os.path.join('./models', 'clean_init', x, 'model.pt') \
                         for x in sorted(os.listdir(os.path.join('./models', 'clean_init')))]
    training_kwargs['clean_model_path'] = clean_model_paths[model_idx]
else:
    raise ValueError('Unsupported trojan_type')

Therefore, args.trojan_type can never be 'evasive_trojan' (otherwise it would hit the ValueError here), so training_kwargs['num_epochs'] is always 10. Hopefully this does not cause any issues with your pretrained models.
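
For what it's worth, a minimal one-line patch (just a sketch, assuming 20 epochs really was intended for the evasive Trojan baseline) would be to reuse the string from the elif branch:

# hypothetical fix, not an upstream commit: match the 'trojan_evasion' string
# so evasive Trojan runs actually get 20 epochs
training_kwargs['num_epochs'] = 20 if args.trojan_type == 'trojan_evasion' else 10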

Hello,

Thank you for pointing this out! In early experiments, we found that using 20 epochs worked better for evasive Trojans on MNIST, but there have been several major changes to the baseline code since then, so we will just standardize it to always use 10 epochs (so this was a serendipitous bug :) ).
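
Concretely, the epochs line would then become something like (sketch of the intended standardization, not the exact commit):

# all trojan types use 10 epochs
training_kwargs['num_epochs'] = 10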

This did not affect the pretrained models in the evaluation server or the training data, so you should be able to make submissions now.

All the best,
Mantas

Thank you!