Website: http://inspiregroup.deptcpanel.princeton.edu/darts/
The code in this repository accompanies the paper DARTS: Deceiving Autonomous Cars with Toxic Signs and its earlier extended abstract Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos, a research project of the INSPIRE group in the Electrical Engineering Department at Princeton University. It is the same code we used to run the experiments, but it excludes some of the run scripts as well as the datasets. Please download the dataset in pickle format here, or visit the original websites for the GTSRB and GTSDB datasets.
The main implementation is in ./lib, which contains:
- utils.py: utility functions
- attacks.py: previously proposed adversarial example generation methods
- keras_utils.py: model definitions in Keras
- OptProjTran.py: our optimization code for generating physically robust adversarial examples
- OptCarlini.py: implementation of the Carlini-Wagner attack
- RandomTransform.py: implementation of random perspective transformation
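As a rough illustration of the idea behind RandomTransform.py (the actual implementation in the repository may differ), a random perspective transformation can be sampled by jittering the four corners of the image and solving for the homography that maps the original corners to the jittered ones:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H mapping four src points to four dst points (DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix H[2,2] = 1

def random_perspective(size, max_shift=0.1, rng=None):
    """Sample a homography by perturbing each corner of a size x size image."""
    rng = rng or np.random.default_rng()
    src = np.array([[0, 0], [size, 0], [size, size], [0, size]], float)
    dst = src + rng.uniform(-max_shift * size, max_shift * size, src.shape)
    return homography(src, dst)

def warp_point(H, p):
    """Apply homography H to a 2D point p (homogeneous coordinates)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

During attack optimization, transformations sampled this way simulate the varying viewing angles a sign is seen from, which is what makes the resulting adversarial examples physically robust.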
The specific data and setup we used in our experiments:
- images: contains original images to generate the attacks
- Original_Traffic_Sign_samples: original traffic signs for the Adversarial Traffic Sign attack
- Logo_samples: original logos for the Logo Attack
- Custom_Sign_samples: blank signs used as backgrounds for the Custom Sign Attack
- adv_signs: contains some of the adversarial signs we produced, saved in pickle format. Organized by attack type: Adversarial_Traffic_Signs, Logo_Attacks, Custom_Sign_Attacks, and Lenticular. Code to read the data is in Run_Robust_Attack.ipynb.
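The pickled signs can be read with Python's standard pickle module. A minimal, self-contained sketch (the file name and the layout of the stored objects here are assumptions; see Run_Robust_Attack.ipynb for the actual loading code):

```python
import os
import pickle
import tempfile

import numpy as np

# Hypothetical stand-in for a file under adv_signs/; the real file names
# and stored object structure may differ.
sign = np.zeros((32, 32, 3), dtype=np.float32)  # dummy 32x32 RGB sign

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "logo_attack.p")
    with open(path, "wb") as f:
        pickle.dump(sign, f)        # save an adversarial sign
    with open(path, "rb") as f:
        loaded = pickle.load(f)     # read it back
```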
- keras_weights: contains the weights of the Keras models used in the experiments
  - weights_mltscl_dataaug.hdf5: multi-scale CNN with data augmentation ("CNN A" in the paper)
  - weights_cnn_dataaug.hdf5: normal CNN with data augmentation ("CNN B" in the paper)
- For videos of our drive-by test, please visit the website listed above.
The main code we used to run the experiments is in Run_Robust_Attack.ipynb. It demonstrates our procedure and the usage of the library functions, and it covers most of the experiments, from generating the attacks to evaluating them in both virtual and physical settings.
Examples of previously proposed adversarial example generation methods are listed in GTSRB.ipynb.
Relevant parameters are set in a separate configuration file, parameters.py.
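A hypothetical sketch of the kind of settings such a file holds (the parameter names and values in the actual parameters.py may differ; only the 43-class figure is a property of GTSRB itself):

```python
# parameters.py-style configuration sketch; names and values are
# illustrative assumptions, not the repository's actual settings.
IMG_SIZE = (32, 32, 3)   # input resolution of the classifiers
NUM_CLASSES = 43         # GTSRB has 43 traffic-sign classes
BATCH_SIZE = 128         # batch size used during optimization
NUM_TRANSFORMS = 10      # random transformations sampled per optimization step
```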
Comments and suggestions can be sent to Chawin Sitawarin (chawins@princeton.edu) and Arjun Nitin Bhagoji (abhagoji@princeton.edu).