cvpr22w_RobustnessThroughTheLens

Official repository of our submission "Adversarial Robustness through the Lens of Convolutional Filters" to the CVPR 2022 workshop "The Art of Robustness: Devil and Angel in Adversarial Machine Learning".


Adversarial Robustness through the Lens of Convolutional Filters

Paul Gavrikov, Janis Keuper


Presented at: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) - The Art of Robustness: Devil and Angel in Adversarial Machine Learning Workshop

Paper | ArXiv | HQ Poster

This is a specialized article on Robustness, derived from our main paper: https://github.com/paulgavrikov/CNN-Filter-DB/

Abstract: Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceptible perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of the 3x3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the linf-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
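To illustrate the kind of per-layer statistics the abstract refers to, here is a minimal NumPy sketch (not the paper's actual analysis code; the function name, the near-zero threshold, and the cosine-similarity proxy for orthogonality are our own choices). It computes the sparsity of a stack of 3x3 filters and the mean absolute pairwise cosine similarity of the flattened kernels, where lower similarity corresponds to more orthogonal filters:

```python
import numpy as np

def filter_stats(filters, eps=1e-2):
    """Toy statistics for an array of shape (n, 3, 3) holding 3x3 kernels.

    Returns (sparsity, mean_abs_cosine):
      - sparsity: fraction of weights with magnitude below `eps`
      - mean_abs_cosine: mean |cosine similarity| over all filter pairs
        (lower values indicate more mutually orthogonal filters)
    """
    flat = filters.reshape(len(filters), -1).astype(float)
    # Sparsity: fraction of near-zero weights across the whole stack.
    sparsity = float(np.mean(np.abs(flat) < eps))
    # Normalize each flattened kernel, then take pairwise cosine similarities.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.clip(norms, 1e-12, None)
    cos = unit @ unit.T
    off_diag = cos[~np.eye(len(flat), dtype=bool)]
    return sparsity, float(np.mean(np.abs(off_diag)))

# Hypothetical example: a dense random filter bank vs. an artificially
# sparsified copy. The sparsified bank should score higher sparsity.
rng = np.random.default_rng(0)
dense = rng.standard_normal((64, 3, 3))
sparse = dense * (rng.random((64, 3, 3)) < 0.3)
print(filter_stats(dense))
print(filter_stats(sparse))
```

In practice one would populate `filters` with the weights of a trained model's convolution layers (e.g. from the dataset linked below) rather than with random kernels.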

[Figure: Activation of first-stage filters]

Data

Download the dataset from https://zenodo.org/record/6414075.

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Gavrikov_2022a_CVPR,
    author    = {Gavrikov, Paul and Keuper, Janis},
    title     = {Adversarial Robustness Through the Lens of Convolutional Filters},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {139-147}
}


Legal

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.