[Query] Adversarial Attack Methods!
innat opened this issue · 1 comment
innat commented
It's an awesome library, thanks for creating it. I have some beginner questions:
- NSL provides APIs for adversarial training, but if I'm not mistaken, it doesn't provide adversarial attack methods such as FGSM, PGD, etc. Will it ever support this feature? Other popular libraries (CleverHans, ART, etc.) provide these.
- Regarding adversarial training, I couldn't see how it aligns with the repo's name. How does the adversarial approach relate to the term neural structured learning?
DualityGap commented
Thanks for your interest in NSL!
- Yes, we do provide various adversarial attack methods (including FGSM, PGD, etc.). See this file for more info: https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/lib/adversarial_neighbor.py
- Connection between the term NSL and adversarial examples: adversarial examples can be treated as "neighbors" of the original example, with "edges" constructed dynamically. So in the NSL repo, these adversarial examples are often referred to as adversarial neighbors. Feel free to see here for more introductory info: https://www.tensorflow.org/neural_structured_learning
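To make the "adversarial neighbor" idea concrete, here is a minimal FGSM-style sketch on a toy logistic-regression model. This is not the NSL API; the names (`fgsm_perturb`, `w`, `b`, `eps`) are invented for illustration. It just shows the core computation an FGSM attack performs: step the input in the sign direction of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Return x + eps * sign(dL/dx) for a binary cross-entropy loss.

    For logistic regression p = sigmoid(w.x + b), the gradient of the
    cross-entropy loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)   # model's predicted probability in (0, 1)
    grad_x = (p - y) * w     # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# A correctly classified point (y = 1)...
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
# ...is pushed in the direction that increases the loss. In NSL terms,
# x_adv is an "adversarial neighbor" of x, with an edge created on the fly.
```

After the perturbation the model's confidence in the true label drops, which is exactly the neighbor that adversarial training then asks the model to classify consistently with the original example.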