
Deep neural networks (DNNs) play an increasingly important role in machine learning due to their outstanding performance compared with traditional approaches. However, DNNs are vulnerable to adversarial attacks and can easily be fooled by well-crafted adversarial examples, so deploying them in fields requiring high reliability poses severe security risks. Spectral norm regularization is a regularization method that keeps the trained model's sensitivity to perturbations of its inputs relatively low, which makes it an appealing strategy for enhancing model robustness. However, exact spectral norm computation is extremely expensive and impractical for large-scale networks. In this paper, we introduce a new framework for spectral norm regularization based on the Fourier method and layer separation. The key insight underlying our work is that it combines the sparsity of weight matrices with the decomposability of convolution layers. Our experimental evaluations provide persuasive evidence that our framework achieves far faster runtimes and better model robustness than the baseline method.
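To make the Fourier idea concrete, the sketch below is not the repository's code, but a minimal illustration of the well-known FFT-based computation (following Sedghi et al., "The Singular Values of Convolutional Layers"): for a convolution layer with circular padding, the layer's singular values are exactly the singular values of the per-frequency channel matrices obtained by a 2D FFT of the zero-padded kernel, so the spectral norm can be read off without materializing the huge convolution matrix. The function name and the circular-padding assumption are ours, not the paper's.

```python
import numpy as np

def conv_spectral_norm_fft(kernel, n):
    """Exact spectral norm of a circular 2D convolution layer via the FFT.

    kernel : array of shape (c_out, c_in, k, k)
    n      : spatial size of the (square) input, n >= k
    """
    c_out, c_in, k, _ = kernel.shape
    # Zero-pad the kernel to the input size before transforming.
    padded = np.zeros((c_out, c_in, n, n))
    padded[:, :, :k, :k] = kernel
    # 2D FFT over the spatial dimensions: one c_out x c_in matrix per frequency.
    transformed = np.fft.fft2(padded)                  # (c_out, c_in, n, n)
    mats = transformed.transpose(2, 3, 0, 1)           # (n, n, c_out, c_in)
    # Batched SVD over all n*n frequency matrices; the layer's spectral norm
    # is the largest singular value across frequencies.
    svals = np.linalg.svd(mats, compute_uv=False)
    return float(svals.max())
```

For a 1x1 kernel this reduces to the kernel's absolute value, and for an all-ones 2x2 single-channel kernel the maximum is attained at the zero frequency, where the FFT equals the sum of the kernel entries.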


Fast Approximate Spectral Norm Regularization for Enhancing Robustness of DNNs

This is a PyTorch implementation of the fast spectral norm regularization algorithm proposed in the paper Fast Approximate Spectral Norm Regularization for Enhancing Robustness of DNNs.

Usage

`GPU_version.py` compares our fast spectral norm regularization algorithm with the most recent existing algorithm. You can modify line 302 to switch the loss among no regularizer (`loss = loss`), our regularizer (`loss = loss + loss_my_conv`), and the existing regularizer (`loss = loss + loss_old_conv`).
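The regularizer terms added to the loss are approximations of the squared spectral norms of the weight tensors. The sketch below is not the repository's implementation; it is a generic power-iteration version of such a term (in the style of Yoshida & Miyato, "Spectral Norm Regularization"), with a function name and hyperparameter (`lam`) of our own choosing, to show what `loss = loss + ...` amounts to.

```python
import torch

def spectral_norm_sq(weight, n_iters=5):
    """Approximate the squared spectral norm of a weight tensor by power iteration."""
    # Flatten conv kernels (c_out, c_in, k, k) into a 2D matrix (c_out, c_in*k*k).
    W = weight.reshape(weight.shape[0], -1)
    # Run the power iteration without tracking gradients; u and v are treated
    # as constants so gradients flow only through W in the final product.
    with torch.no_grad():
        v = torch.randn(W.shape[1])
        for _ in range(n_iters):
            u = torch.nn.functional.normalize(W @ v, dim=0)
            v = torch.nn.functional.normalize(W.t() @ u, dim=0)
    sigma = u @ W @ v   # estimate of the largest singular value
    return sigma ** 2
```

A training step would then add the penalty over all weight matrices, e.g. `loss = loss + lam * sum(spectral_norm_sq(p) for p in model.parameters() if p.dim() > 1)`, where `lam` is a hypothetical regularization strength.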

Dataset

Please get the data from here.

Test Result