A Python toolbox to create adversarial examples that fool neural networks in PyTorch.


moorkh: Adversarial Attacks in PyTorch

moorkh is a PyTorch library for generating adversarial examples, with full support for batched images in all attacks.

About the name

moorkh is a Hindi word meaning "fool" in English, which is what we make neural networks by generating adversarial examples. These same examples can also be used to make networks more robust.

Usage

Installation

  • pip install moorkh, or
  • git clone https://github.com/akshay-gupta123/moorkh
import moorkh
import torch.nn as nn

# Wrap the model with a normalization layer (mean/std are your dataset
# statistics) so attacks operate directly on raw [0, 1] images.
norm_layer = moorkh.Normalize(mean, std)
model = nn.Sequential(
    norm_layer,
    model
)
model.eval()
attack = moorkh.FGSM(model)
adversarial_images = attack(images, labels)
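To illustrate what an attack like FGSM does under the hood, here is a minimal, self-contained sketch in plain PyTorch (not moorkh's actual implementation): it takes one step of size eps along the sign of the input gradient of the loss, then clamps the result back to the valid image range. The model, eps value, and shapes below are assumptions for the demo.

```python
import torch
import torch.nn as nn

def fgsm(model, images, labels, eps=0.03):
    """One-step FGSM: perturb inputs along the sign of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Step in the gradient-sign direction and clamp to the valid image range.
    adv = (images + eps * grad.sign()).clamp(0, 1)
    return adv.detach()

# Tiny demo classifier (any model mapping images to logits would do).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
model.eval()
images = torch.rand(4, 3, 8, 8)
labels = torch.randint(0, 10, (4,))
adv_images = fgsm(model, images, labels)
```

Because the perturbation is eps times a sign vector, the L-infinity distance between adv_images and images never exceeds eps, which is the constraint FGSM is designed around.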

Implemented Attacks

To-Do's

  • Adding more attacks
  • Writing documentation
  • Adding demo notebooks
  • Adding summaries of the implemented papers (for my own understanding)

Contribution

This library is developed as part of my learning; if you find any bug, feel free to create a PR. All kinds of contributions are always welcome!

References