moorkh is a PyTorch library for generating adversarial examples, with full support for batches of images in all attacks.
The name moorkh is the Hindi word for "fool" in English, which is what we make neural networks look like by generating adversarial examples. Of course, the same examples can also be used to make models more robust.
pip install moorkh

or

git clone https://github.com/akshay-gupta123/moorkh
import torch.nn as nn
import moorkh

# Wrap the model with a normalization layer so attacks operate
# on raw images in [0, 1] (mean and std are your dataset statistics).
norm_layer = moorkh.Normalize(mean, std)
model = nn.Sequential(
    norm_layer,
    model
)
model.eval()

attack = moorkh.FGSM(model)
adversarial_images = attack(images, labels)
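For intuition, here is a minimal sketch of what a normalization layer like the one above could look like. This is a hypothetical stand-in, not moorkh's actual `Normalize` implementation: it folds per-channel normalization into the model so attacks can stay in raw `[0, 1]` pixel space.

```python
import torch
import torch.nn as nn

class Normalize(nn.Module):
    """Hypothetical sketch of a normalization layer: applies per-channel
    (x - mean) / std inside the model's forward pass."""

    def __init__(self, mean, std):
        super().__init__()
        # Register as buffers so .to(device) moves them with the model.
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std
```

Keeping normalization inside the model means the attack's epsilon budget is expressed in pixel units, which is the usual convention in the adversarial-robustness literature.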
- Explaining and Harnessing Adversarial Examples: FGSM
- Adversarial Examples in the Physical World: IFGSM
- On the Limitation of Convolutional Neural Networks in Recognizing Negative Images: Semantic
- Adding Noise: Noise
- Towards Deep Learning Models Resistant to Adversarial Attacks: PGD, PGDL2
- Ensemble Adversarial Training: Attacks and Defenses: RFGSM
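To illustrate the simplest of the attacks listed above, here is a self-contained sketch of FGSM in plain PyTorch (not moorkh's internal implementation): perturb each pixel by epsilon in the direction of the sign of the loss gradient.

```python
import torch
import torch.nn as nn

def fgsm(model, images, labels, eps=8 / 255):
    """Fast Gradient Sign Method (Goodfellow et al., 2015) sketch.

    images are assumed to be in [0, 1]; eps is the L-infinity budget.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    # Gradient of the loss w.r.t. the input pixels.
    grad = torch.autograd.grad(loss, images)[0]
    # One signed step of size eps, clamped back to the valid pixel range.
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()
```

IFGSM and PGD iterate this step several times with a smaller step size (PGD also projects back into the epsilon ball and typically starts from a random point inside it).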
- Adding more attacks
- Writing documentation
- Adding demo notebooks
- Adding summaries of the implemented papers (for my own understanding)
This library was developed as part of my learning; if you find any bug, feel free to create a PR. All kinds of contributions are always welcome!
- Adversarial-Robustness-Toolbox by IBM
- Foolbox by Bethgelab
- Cleverhans by Google Brain
- Reliable and Interpretable Artificial Intelligence, an ETH Zurich course
- Adversarial Robustness - Theory and Practice, a tutorial by Zico Kolter and Aleksander Madry