fgsm
There are 54 repositories under the fgsm topic.
advboxes/AdvBox
AdvBox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox also provides a command-line tool to generate adversarial examples with zero coding.
thu-ml/ares
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
sarathknv/adversarial-examples-pytorch
Implementation of Papers on Adversarial Examples
Mrzhouqifei/DBA
Detection by Attack: Detecting Adversarial Samples by Undercover Attack
jsikyoon/adv_attack_capsnet
TensorFlow implementation of an adversarial attack on Capsule Networks
wanglouis49/pytorch-adversarial_box
PyTorch library for adversarial attack and training
as791/Adversarial-Example-Attack-and-Defense
This repository implements three adversarial example attack methods (FGSM, I-FGSM, and MI-FGSM) and defensive distillation as a defense against all three, using the MNIST dataset.
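For reference, the one-step FGSM update at the heart of these attacks is just x' = x + ε·sign(∇ₓL). A minimal PyTorch sketch, assuming a classifier `model` and a batch `image`, `label` supplied by the caller (these names are illustrative, not taken from the repo above):

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.03):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel in the direction that increases the loss, then clamp
    # back to the valid [0, 1] image range.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```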
poloclub/jpeg-defense
SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
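SHIELD's core idea is input preprocessing: re-encoding the image as JPEG discards much of the high-frequency adversarial perturbation before the model ever sees it. A hedged sketch of that idea using Pillow (not the repo's actual code; the quality setting of 75 is illustrative):

```python
import io
from PIL import Image

def jpeg_defense(pil_image, quality=75):
    """Re-encode the input as JPEG to squeeze out high-frequency perturbations."""
    buf = io.BytesIO()
    pil_image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```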
Jeffkang-94/pytorch-adversarial-attack
Implementation of gradient-based adversarial attacks (FGSM, MI-FGSM, PGD)
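Of these, MI-FGSM stabilizes the iterated signed-gradient update with a momentum term. A sketch under the same assumptions as above (`model` is a PyTorch classifier taking NCHW batches; ε and the step count are illustrative, while the decay factor μ = 1.0 matches the original paper's default):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=0.03, steps=10, mu=1.0):
    """Momentum Iterative FGSM inside an L-infinity ball of radius eps."""
    alpha = eps / steps
    adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        adv = adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        # Accumulate momentum from the per-sample L1-normalized gradient.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        adv = adv.detach() + alpha * g.sign()
        adv = x + (adv - x).clamp(-eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()
```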
edosedgar/mtcnnattack
The first real-world adversarial attack on the MTCNN face detection system to date
AlbertMillan/adversarial-training-pytorch
Implementation of adversarial training under the fast gradient sign method (FGSM), projected gradient descent (PGD), and CW attacks, using Wide-ResNet-28-10 on CIFAR-10. The sample code remains reusable when the model or dataset changes.
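The training scheme such repos implement crafts adversarial examples on the fly and descends on them instead of the clean batch. A compact sketch of PGD adversarial training, assuming a standard PyTorch classifier with a `loader` and `optimizer` defined elsewhere (the hyperparameters are common illustrative choices, not the repo's):

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Projected gradient descent inside an L-infinity ball of radius eps."""
    adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        adv = adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = x + (adv - x).clamp(-eps, eps)           # project back
        adv = adv.clamp(0, 1)
    return adv.detach()

def adv_train_epoch(model, loader, optimizer, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd(model, x, y)                       # craft on the fly
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()    # train on adversarial batch
        optimizer.step()
```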
kaiyoo/ML-Anomaly-Detection
Detection of network traffic anomalies using unsupervised machine learning
lepoeme20/Adversarial-Attacks
Reproduce multiple adversarial attack methods
AgentMaker/Paddle-Adversarial-Toolbox
Paddle-Adversarial-Toolbox (PAT) is a Python library for Deep Learning Security based on PaddlePaddle.
mjDelta/kervolution-under-adversarial-attack-pytorch
Implements Kervolutional Neural Networks (CVPR 2019) and compares them with a CNN under white-box attacks
messi84/Multiple-Adversarial_Examples_attack
The rise and fall of six dynasties pass like a dream; the months slip swiftly by. Even if the year turns cold and the road runs long, this resolve shall be hard to take away.
Mayukhdeb/deep-chicken-saviour
using adversarial attacks to confuse deep-chicken-terminator :shield: :chicken:
tarun360/Adversarial-Attack-on-3D-U-Net-model-Brain-Tumour-Segmentation.
Adversarial Attack on 3D U-Net model: Brain Tumour Segmentation.
pradeep-pyro/FGS
Fast Gradient Sign Method for Adversarial Attack (PyTorch)
n0mi1k/perturbify
A TensorFlow adversarial machine learning attack toolkit that adds perturbations to make image recognition models misclassify an image
Ksuryateja/AdvExGAN
Adversarial Attack using a DCGAN
nikhitmago/adversarial-attacks-cnn
Implements white-box adversarial attacks on the parameters and architecture of a CNN in PyTorch
aminul-huq/WideResNet_MNIST_Adversarial_Training
WideResNet implementation on the MNIST dataset: FGSM and PGD adversarial attacks on standard training, plus PGD adversarial training and Feature Scattering adversarial training.
LawrenceMMStewart/Segat
PyTorch FGSM attack module for semantic segmentation networks, with examples provided for DeepLabV3.
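Extending FGSM from classification to segmentation mostly means swapping the scalar label for a dense mask: the cross-entropy is averaged over pixels and everything else is unchanged. A hedged sketch assuming the model returns per-pixel logits of shape (N, C, H, W) directly (torchvision's DeepLabV3, for instance, wraps them in a dict, which this sketch ignores):

```python
import torch
import torch.nn.functional as F

def fgsm_segmentation(model, image, mask, eps=0.02):
    """FGSM against a segmentation net; `mask` holds integer class ids (N, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                  # assumed shape (N, C, H, W)
    loss = F.cross_entropy(logits, mask)   # mean over all pixels
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```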
sohailahmedkhan/Adversarial-Attack-on-Fine-Tuned-Flood-Detection-Model
Implementation of the FGSM (Fast Gradient Sign Method) attack on a fine-tuned MobileNet architecture trained for flood detection in images.
vinuthags/adversarial_attack
Adversarial Attacks on MNIST
yahi61006/adversarial-attack-on-mtcnn
An adversarial patch trained with I-FGSM to attack the MTCNN face detection system
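Patch attacks like this one optimize a small pasted region rather than a full-image perturbation. An illustrative I-FGSM-style patch-training loop (not the repo's code; the fixed top-left placement, the caller-supplied `loss_fn`, and the hyperparameters are all assumptions):

```python
import torch

def train_patch(model, loader, loss_fn, patch_size=50, alpha=1/255, device="cuda"):
    """Learn a patch that maximizes loss_fn (e.g. negative detection confidence)."""
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    for x, y in loader:                              # one pass over the data
        x, y = x.to(device).clone(), y.to(device)
        x[:, :, :patch_size, :patch_size] = patch    # paste patch top-left
        loss = loss_fn(model(x), y)
        grad = torch.autograd.grad(loss, patch)[0]
        # Iterated signed-gradient ascent on the patch pixels, kept in [0, 1].
        patch = (patch + alpha * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    return patch.detach()
```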
bliutech/adversarial-networks
ECE C147: Neural Networks & Deep Learning. Repository for "Developing Robust Networks to Defend Against Adversarial Examples". Implementing adversarial data augmentation on CNNs and RNNs.
csgwon/pytorch-simple-examples
Simple example notebooks using PyTorch
makhanov/CNN_pytorch_adversarial_attack_Fashion_MNIST
Repository containing a pre-trained CNN model in PyTorch that hits 89% accuracy on the Fashion-MNIST dataset; an adversarial attack was implemented against this model, with results reported in the repository.
selous123/libadver
Package for adversarial attacks in PyTorch
sisinflab/MSAP
In this work, we extend FGSM into multistep adversarial perturbation (MSAP) procedures to study recommenders' robustness under more powerful attacks. Keeping the perturbation magnitude fixed, we show that MSAP is far more harmful than FGSM in corrupting the recommendation performance of BPR-MF.
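In spirit, MSAP replaces FGSM's single signed-gradient step on the model's embeddings with several smaller clipped steps. A rough sketch on BPR-MF-style factor matrices (the `loss_fn` signature, step sizes, and the L∞ projection are assumptions for illustration, not the paper's exact formulation):

```python
import torch

def msap_perturb(loss_fn, user_emb, item_emb, eps=0.5, alpha=0.1, steps=10):
    """Multistep perturbation of embedding matrices that maximizes loss_fn."""
    delta_u = torch.zeros_like(user_emb)
    delta_i = torch.zeros_like(item_emb)
    for _ in range(steps):
        du = delta_u.clone().requires_grad_(True)
        di = delta_i.clone().requires_grad_(True)
        loss = loss_fn(user_emb + du, item_emb + di)   # e.g. BPR loss
        gu, gi = torch.autograd.grad(loss, [du, di])
        # Signed-gradient steps, clipped so the total perturbation stays small.
        delta_u = (delta_u + alpha * gu.sign()).clamp(-eps, eps)
        delta_i = (delta_i + alpha * gi.sign()).clamp(-eps, eps)
    return delta_u.detach(), delta_i.detach()
```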
Y3NH0/FGSM_adversarial_attack_on_malware_detection
Adversarial attack on a malware detector