YyzHarry/ME-Net

Several questions about this paper

HaoerSlayer opened this issue · 10 comments

Hello, a few months ago I read this paper and tried to make some improvements, but I recently realized that I had not even reproduced the results in the original paper. For example, at the time I ran some white-box attacks on CIFAR-10; the model I obtained reached 57.6 accuracy under 7-step PGD, which is close to the 59.8 in Table 11, so I assumed the reproduction had succeeded. Recently, however, I happened to notice that under 20-step PGD the same model only reaches 42.1, far below the 52.6 in Table 11. I then ran white-box attacks on all three datasets (MNIST, CIFAR-10, SVHN) and found that my results differ substantially from the numbers reported in the paper. I list them here and hope you can help me figure out what is going wrong:
MNIST, USVT, p=0.3, hyperparameters identical to Table 8. Under white-box attacks my results are: clean 96.6, PGD-40 87.85, PGD-100 81.57 (the paper's Table 16 reports 96.8 / 86.5 / 83.1). This group is at least roughly consistent.
CIFAR-10, USVT, p=0.5. For this model I did not train my own but used the checkpoint you provide. My results are: clean 87.18, PGD-7 57.60, PGD-20 42.1 (the paper's Table 11 reports PGD-7 59.8, PGD-20 52.6). The PGD-20 number falls far short of the paper. This is the most serious discrepancy, and since both the attack code and the model come from you, I am quite confused.
SVHN, USVT, p=0.3, hyperparameters identical to Table 8. My results are: clean 87.48, PGD-7 78.00, PGD-20 73.88 (Table 19 reports clean 88.3, PGD-7 74.7, PGD-20 61.4). Again the PGD-20 number differs a lot. I did not evaluate attacks with more iterations on the full test set because it is time-consuming, but judging from a small randomly selected subset, the CIFAR numbers drop even lower while SVHN converges in the low 70s.
These are the experiments where my results diverge significantly from the paper, and I hope you can reply, especially regarding the CIFAR result: almost nothing there is my own, since both the model and the attack are the ones you already provide, so such a large deviation is very strange. In addition, there are a few other points that puzzle me and that I hope you can also answer:
1. In Table 5 the adaptive attacks are even weaker than plain BPDA. Surely that is not a reasonable adaptive attack?
2. Since you already know about BPDA, you must also know about EOT. Why is there no evaluation against it? The masking in ME-Net obviously introduces randomness.
3. The numbers in Tables 6 and 7 suggest that ME-Net improves generalization on clean data, but I did not observe this. In fact, the checkpoint you provide (CIFAR, pure, USVT, p=0.5) only seems to reach accuracy in the 80s, nowhere near 94.9. I guessed this might refer to top-5 accuracy, but the paper does not say so.

I think I have figured out the answer to the third question myself: when discussing the improved generalization on clean data, you conveniently use the model with p=0.9, which is not the same model as the small-p ones that bring the significant robustness gains. In other words, if you pursue the better defense, there is actually no "Improving Generalization". Is that right?

Results of purely trained ME-Net

  • First of all, we have made it explicit that unless otherwise specified, we use nuclear norm minimization for ME (refer to “Experimental Setup” in Section 3). Therefore, when referring to the numbers and making a fair, thorough evaluation, this is the setting that should be considered and focused on. While USVT is fast, it is well known that it does not perform well in practice for ME, since the threshold is hard to select (a rough sketch after this list illustrates the difficulty).
  • We are sorry that the pre-trained models provided are not the best checkpoints we have. Unfortunately our storage was erased some time ago, and these models are the few we could find on our local machine. However, our repo does contain the complete code needed to reproduce the results.
  • For CIFAR, also note that there is a tradeoff between robustness & accuracy: a lower p usually leads to better robustness. This tradeoff is studied explicitly in the Appendix. We also have a model with p=0.3 that you can try. You can also run PGD until convergence (e.g., > 100 steps) to obtain the actual robustness, which is of more interest.
  • Again, for SVHN we did not tune the parameters much. It is possible that different concatenation methods can lead to better results.
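
For concreteness, USVT amounts to roughly the following. This is a minimal sketch for a single image channel, not the implementation in this repo, and threshold_scale is a placeholder constant; the sensitivity of the reconstruction to that constant is exactly the threshold-selection difficulty mentioned above.

    # Minimal USVT sketch for completing a randomly masked image channel.
    # Illustrative only -- not the routine used in this repo; threshold_scale
    # is a hypothetical knob whose value strongly affects the reconstruction.
    import numpy as np

    def usvt_complete(img, mask, threshold_scale=2.3):
        """img: (n, n) array in [0, 1]; mask: (n, n) binary array of observed pixels."""
        n = img.shape[0]
        p_obs = max(mask.mean(), 1.0 / n)            # fraction of observed entries
        y = img * mask                               # unobserved entries set to 0
        u, s, vt = np.linalg.svd(y, full_matrices=False)
        tau = threshold_scale * np.sqrt(n * p_obs)   # singular-value threshold
        s_kept = np.where(s >= tau, s, 0.0)          # drop small singular values
        est = (u * s_kept) @ vt / p_obs              # rescale for the missing mass
        return np.clip(est, 0.0, 1.0)                # project back to the valid pixel range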

Adaptive attacks

We believe the proposed adaptive attacks make perfect sense. Note that our paper argues for the low-rank structure within images, and these attacks are designed to attack the structural space of the inputs. The design of an adaptive attack is independent of its performance. We would welcome any ideas you have on new attacks.

About randomness & EOT

  • We agree with the concern. In fact, randomness was considered: in the Appendix we study the effect of random restarts (up to 50 restarts). We indeed caution the readers that “since ME-Net leverages randomness through masking, and it would be helpful to understand how random restarts, with a hard success criterion, affect the overall pipeline.”
    However, we believe that “arguably, one could potentially always handle such drawbacks by introducing restarts during training as well, so as to maximally match the training and testing conditions.” This is the reason we did not focus heavily on randomness.
  • Regarding EOT: as mentioned above, we did not include a further study on randomness because of the belief stated in the paper. Even if EOT can be applied to attack our method, we can also incorporate the EOT process when training ME-Net (after all, this is what adversarial training such as PGD does). If we account for attacks on the randomness by adding EOT into training, the model sees more powerful adversarial examples and its robustness is further enhanced. Considering this, and since EOT would incur significant computational cost, adding it becomes somewhat tricky from our perspective. A rough sketch of how an EOT-style attack step would interact with the random masking is given right after this list.
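
For clarity, one EOT-style PGD step against the random masking would look roughly like the following. This is a sketch, not code from the paper or this repo: me_preprocess stands in for the mask-plus-ME step and is assumed differentiable here, whereas in practice the backward pass through ME would be approximated with BPDA; eps, alpha and the step counts are placeholder values.

    # Sketch of EOT-PGD: average the input gradient over several random draws of
    # the (assumed differentiable) preprocessing before taking each PGD step.
    import torch
    import torch.nn.functional as F

    def eot_pgd(model, me_preprocess, x, y, eps=8/255, alpha=2/255, steps=20, eot_samples=10):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            grad = torch.zeros_like(x_adv)
            for _ in range(eot_samples):             # expectation over the masking randomness
                loss = F.cross_entropy(model(me_preprocess(x_adv)), y)
                grad = grad + torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * (grad / eot_samples).sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
            x_adv = x_adv.detach()
        return x_adv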

Generalization on clean data

Your understanding is roughly correct. When talking about generalization on clean data, we treat ME-Net as a data augmentation technique (roughly sketched below). There is always a tradeoff between robustness & accuracy, but adding ME-Net should improve generalization for both purely trained and adversarially trained models (e.g., see the clean accuracies in Table 13 for different p).
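
Roughly, the augmentation view amounts to something like the sketch below. It is not the actual training pipeline: me_reconstruct is a stand-in for whichever matrix-estimation routine is used (e.g., the usvt_complete sketch earlier), and the number of copies is a placeholder.

    # Sketch of ME-as-augmentation: train on ME reconstructions of masked copies.
    import numpy as np

    def me_augment(img, me_reconstruct, p=0.5, copies=4):
        """img: (H, W) array in [0, 1]; keep each pixel independently with probability p."""
        augmented = []
        for _ in range(copies):
            mask = (np.random.rand(*img.shape) < p).astype(img.dtype)
            augmented.append(me_reconstruct(img * mask, mask))
        return augmented   # the network is trained on these instead of the raw image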

Thank you for the reply! Most of it matches what I suspected, e.g., that the numbers in the paper were obtained with the nucnorm strategy. I only asked because the gap was so large and my machine does not allow very time-consuming verification. In any case, nice work.

Hello, I have a new question about the code. I found that the PGD implemented in foolbox is somewhat weaker than the attack you wrote yourself, which leads to higher reported accuracy. I noticed this may be caused by the following statement:

    if adversarial is None:                      # the attack did not return a perturbed input
        adversarial = inputs.astype(np.float32)  # fall back to the clean sample

During evaluation, many inputs are in fact never perturbed and are fed in as the original samples. Have you run into the same situation? If so, is there a way to make the two implementations consistent? Or could you briefly explain how the "adversarial is None" case arises?
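
A small diagnostic along these lines would make it easy to count how often this fallback fires and how much it inflates the reported accuracy. This is only a sketch; attack_fn and model_predict are hypothetical stand-ins for the foolbox attack call and the classifier, not functions from this repo.

    # Count how often the attack returns None (fallback to the clean input)
    # versus producing a perturbed input that is still classified correctly.
    import numpy as np

    def evaluate_with_fallback(model_predict, attack_fn, inputs, labels):
        correct, fell_back = 0, 0
        for x, y in zip(inputs, labels):
            adversarial = attack_fn(x, y)
            if adversarial is None:                  # attack produced no perturbed input
                adversarial = x.astype(np.float32)
                fell_back += 1
            if model_predict(adversarial) == y:
                correct += 1
        return correct / len(labels), fell_back / len(labels)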

Another new question... As I recall, the boundary attack starts from an image that is already misclassified, and the optimization only tries to shrink the distance to the original image. During this process, does the classification result stay unchanged (for a vanilla model)? I do not understand why a model with no defense at all still has non-zero accuracy under the boundary attack. This question is probably unrelated to ME-Net, but I cannot figure it out. Also, is there a way to make the boundary and SPSA attacks faster to test? With 1000 iterations it takes several minutes to generate a single adversarial example.
Many thanks in advance for your advice.

Hello, could you share the parameter settings for the CW attack under the vanilla method? Under the ME method I can indeed reach 80+ accuracy, but with the same settings the vanilla model's accuracy is also very high, which means the CW attack did not actually succeed. I tried increasing the learning rate of the CW attack and the accuracy drops noticeably, but it still does not reach single digits. I think I need more precise parameter settings to reach or approach the 9.3% and 8.9% in Table 9.

Glad to see my replies addressed your previous concerns. :)

For the new questions:

  • Foolbox: One reason might be that the original input is already classified incorrectly, so there is no point in generating an adversarial example. For white-box PGD I would suggest using the implementation in this repo.
  • Boundary & SPSA: Agreed that the computational cost is very high. We intend to make the Boundary / SPSA attacks strong enough, so we choose large iteration / batch-size values, as also recommended in the original papers [1, 2]. You could try smaller values for faster computation, which may trade off the strength of the adversarial examples.
  • CW: The parameters may need some tuning. I currently cannot find the related configs, but I recommend referring to the parameters in this repo and in the original CW repo.

I'm closing this issue now, as the concerns have been addressed. Feel free to leave comments if you have any.

Thank you for the reply. For some of the issues I raised earlier, I tried to find solutions myself while waiting for your answer, but my English is not great, so digging through the material was hard and I am not sure whether what I did is correct. Let me describe my attempts and I hope you can comment on them.

Regarding the slightly higher accuracy under foolbox: I noticed that foolbox does not update the adversarial example at every iteration, so I modified foolbox.adversarial.py and added self.__best_adversarial = image in the prediction function. I feel this brings it closer to a hand-written PGD, where the sample is updated at every prediction, which matches the usual understanding of PGD.

Regarding CW: the Madry paper uses only 30 iterations, while Carlini's L2 attack iterates 10,000 times, so I think these are really two different attacks. What Madry does is PGD on the CW loss, so it seems wrong to use foolbox's CW-L2 directly; instead one should run PGD but not use cross-entropy as the loss function. I implemented this in foolbox.models.pytorch.py by replacing the cross-entropy with:

    mask = torch.zeros_like(predictions)
    mask[0, target.item()] = 1
    mask = mask.to(predictions.device)
    # CW margin loss: maximizing it pushes the true-class logit at least 20 below the largest other logit
    loss = -torch.relu(torch.sum(mask * predictions)
                       - torch.max((1 - mask) * predictions - 1e4 * mask) + 20)

Since the Madry numbers in your table are taken directly from the adversarial training paper, I think the ME-Net experiments should also use PGD on the CW loss for consistency; that would make the comparison more convincing. Of course, given my English, I am not sure whether I have misinterpreted something along the way; if you think this is wrong, please point it out.
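
For reference, "PGD on the CW loss" in this sense can be sketched end to end as follows. This is an illustration, not code from the paper or the repo; eps, alpha, steps and the margin kappa are placeholder values.

    # Standard L-inf PGD loop whose objective is the CW margin instead of
    # cross-entropy; maximizing the loss pushes the true-class logit below the
    # largest other logit by at least kappa.
    import torch
    import torch.nn.functional as F

    def cw_margin_loss(logits, y, kappa=20.0):
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        one_hot = F.one_hot(y, num_classes=logits.size(1)).bool()
        other_max = logits.masked_fill(one_hot, float('-inf')).max(dim=1).values
        return -torch.relu(true_logit - other_max + kappa).mean()

    def pgd_on_cw_loss(model, x, y, eps=8/255, alpha=2/255, steps=30):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = cw_margin_loss(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
            x_adv = x_adv.detach()
        return x_adv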

Finally, about the boundary attack, one of my questions still has not been answered: I do not understand why the vanilla method's accuracy is not 0% under the boundary attack. My understanding of the boundary attack is that every step keeps the network's prediction on the generated sample unchanged (i.e., misclassified); the attack picks a misclassified starting point, and the iterations only shrink the distance to the original sample. Of course, a 100%-successful attack sounds too good to be true, but that is exactly what puzzles me.

Looking forward to your reply!

Friend... is the adversarial training also done with nucnorm? That is far too slow... by the time I finish, it will probably be Chinese New Year.

In train_adv.py, both startp and endp default to 0.5. Is this intentional or an oversight? Why not split them with mask_num as in train_pure?