L¹ `FGM` is wrong + extend to all p >= 1
Closed this issue · 5 comments
Hello,
I'm not sure, but I think the FGM extension to $L^1$ is wrong.
From what I can read here, it seems to me that the current version implements (essentially) $\text{noise direction}=\frac{\nabla}{\Vert\nabla\Vert_1}$, when putting all the mass on the coordinate of largest $\vert\nabla_i\vert$ (with the matching sign) gives a higher inner product, namely $\Vert\nabla\Vert_\infty$.
Indeed, in both cases $\Vert\text{noise direction}\Vert_1=1$, but only the second choice attains equality in Hölder's inequality.
Edit: See here for the generalization to all $p\geq 1$.
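A quick numerical check of this claim (just a sketch; `grad` is an arbitrary stand-in for the loss gradient, not taken from ART):

```python
import numpy as np

# Stand-in for the loss gradient w.r.t. the input (illustrative values).
rng = np.random.default_rng(0)
grad = rng.normal(size=8)

# What the current L1 version appears to compute: normalize by the L1 norm.
current = grad / np.abs(grad).sum()

# L1-optimal direction: all mass on the largest-|grad| coordinate.
optimal = np.zeros_like(grad)
i_star = np.argmax(np.abs(grad))
optimal[i_star] = np.sign(grad[i_star])

# Both directions have unit L1 norm ...
assert np.isclose(np.abs(current).sum(), 1.0)
assert np.isclose(np.abs(optimal).sum(), 1.0)

# ... but only the argmax one attains the Hölder bound ||grad||_inf.
assert np.isclose(grad @ optimal, np.abs(grad).max())
assert grad @ current < grad @ optimal
```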
Hi @ego-thales Thank you for this comment. Without deciding on the correctness yet, how did you notice this issue? Have you already checked which version the literature on FGM is using?
Thanks for your answer,
I've stumbled upon this because, while reading the FGSM paper (the reference for the implementation), I thought about generalizing it to other values of $p$.
Actually, now that I think about it, I don't see any reason why this attack is not generalized to any $p\geq 1$.
Let $q$ be the Hölder conjugate of $p$ (i.e. $\frac{1}{p}+\frac{1}{q}=1$) and take $\text{noise direction}=\operatorname{sign}(\nabla)\frac{\vert\nabla\vert^{q-1}}{\Vert\nabla\Vert_q^{q-1}}$ (element-wise), one gets:

- $\Vert\text{noise direction}\Vert_p=1$,
- $\langle \nabla, \text{noise direction}\rangle=\Vert\nabla\Vert_q$ (I skip the quick computation, but mainly because $\frac{q}{p}+1=q$),

which is the equality case of Hölder's inequality and as such, optimal.
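The two bullet points can be checked numerically; a minimal sketch (the function name `fgm_direction` is mine for illustration, not ART's API):

```python
import numpy as np

def fgm_direction(grad, p):
    # Unit-L^p direction maximizing <grad, d>, via the Hölder equality case.
    # (Sketch; `fgm_direction` is an illustrative name, not ART's API.)
    if p == 1:
        # Limit case: all mass on the largest-magnitude coordinate.
        d = np.zeros_like(grad)
        i = np.argmax(np.abs(grad))
        d[i] = np.sign(grad[i])
        return d
    if np.isinf(p):
        return np.sign(grad)  # conjugate exponent q = 1
    q = p / (p - 1)  # Hölder conjugate: 1/p + 1/q = 1, i.e. q/p + 1 = q
    return np.sign(grad) * np.abs(grad) ** (q - 1) / np.linalg.norm(grad, q) ** (q - 1)

rng = np.random.default_rng(0)
grad = rng.normal(size=10)
for p in (1, 1.5, 2, 3, np.inf):
    q = np.inf if p == 1 else (1.0 if np.isinf(p) else p / (p - 1))
    d = fgm_direction(grad, p)
    assert np.isclose(np.linalg.norm(d, p), 1.0)          # ||d||_p = 1
    assert np.isclose(grad @ d, np.linalg.norm(grad, q))  # <grad, d> = ||grad||_q
```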
As such, it would be a nice addition to entirely generalize FGM to all $p\geq 1$.
Hi @ego-thales Thank you very much for the explanation and pull request! Let me take a closer look at the required changes. Related to this issue in FGSM, what do you think about the perturbation per iteration and overall perturbation calculation for p=1 in the Projected Gradient Descent attacks in art.attacks.evasion.projected_gradient_descent.*?
I'm not entirely sure, but after a quick glance it looks to me that PGD was implemented as a subclass of FGSM and inherits its loss from it.
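For context on the per-iteration question, here is a minimal $L^1$ PGD-style loop (purely a sketch of the idea, not ART's implementation; the projection below is a simple radial scaling, not the exact Euclidean projection onto the $L^1$ ball):

```python
import numpy as np

def l1_step(grad):
    # L1-optimal per-iteration direction: all mass on the largest-|grad| entry.
    d = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    d[i] = np.sign(grad[i])
    return d

def radial_l1_projection(delta, eps):
    # Scale the accumulated perturbation back inside the L1 ball of radius eps.
    # (Radial scaling sketch, not the exact Euclidean projection.)
    norm = np.abs(delta).sum()
    return delta if norm <= eps else delta * (eps / norm)

def pgd_l1(grad_fn, x, eps, step, n_iter):
    # Illustrative PGD loop: ascend the loss, keep ||delta||_1 <= eps overall.
    delta = np.zeros_like(x)
    for _ in range(n_iter):
        delta = delta + step * l1_step(grad_fn(x + delta))
        delta = radial_l1_projection(delta, eps)
    return x + delta

# Toy usage: gradient of ||z||^2 / 2 is z itself.
x = np.array([1.0, -2.0, 0.5])
x_adv = pgd_l1(lambda z: z, x, eps=0.3, step=0.1, n_iter=5)
assert np.abs(x_adv - x).sum() <= 0.3 + 1e-9  # overall budget respected
```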