fgsm-attack
There are 24 repositories under the fgsm-attack topic.
hammaad2002/ASRAdversarialAttacks
An ASR (Automatic Speech Recognition) adversarial attack repository.
fanghenshaometeor/vanilla-adversarial-training
vanilla training and adversarial training in PyTorch
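As a point of reference for the adversarial-training entry above: a common recipe trains the model on perturbed batches instead of (or alongside) clean ones. Below is a minimal PyTorch sketch of one FGSM-based adversarial training step, with illustrative names and hyperparameters that are assumptions, not code from the repository.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One training step on FGSM-perturbed inputs (illustrative sketch, not the repo's code)."""
    # Craft FGSM examples with the current model parameters.
    x_req = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()

    # Update the model on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```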
SeungjaeLim/Machine_Learning_Security
Individual study in the Computer Architecture and Systems Laboratory (CASYS) with Prof. Jaehyuk Huh, Summer 2021.
deepmancer/adversarial-attacks-robustness
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
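For reference, the PGD attack mentioned above is essentially FGSM applied iteratively with a small step size, projecting back into an epsilon-ball around the original input after every step. A minimal L-infinity PGD sketch in PyTorch (generic code under standard assumptions, not taken from the repository):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: iterated gradient-sign steps projected back into the eps-ball."""
    # Start from a random point inside the eps-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign step, then projection onto the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0, 1)
    return x_adv.detach()
```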
Shreyasi2002/Adversarial_Attack_Defense
This work focuses on enhancing the robustness of targeted classifier models against adversarial attacks. To achieve this, a convolutional autoencoder-based approach is employed to counter the adversarial perturbations introduced into the input images.
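One way such an autoencoder defense is commonly set up (a sketch with an assumed architecture, not the repository's actual model): a small convolutional autoencoder is trained to map perturbed images back toward their clean counterparts, and the classifier is then fed the reconstruction rather than the raw input.

```python
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Conv autoencoder that reconstructs a clean image from a (possibly attacked) input."""
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

Training would typically minimize a reconstruction loss (e.g. MSE) between the output for an attacked image and the corresponding clean image.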
aaaastark/adversarial-network-attack-noise-on-mnist-dataset-pytorch
Adversarial Network Attacks (PGD, pixel, FGSM) Noise on MNIST Images Dataset using Python (Pytorch)
abhinav-bohra/Adversarial-Machine-Learning
Adversarial Sample Generation
ericyoc/adversarial-defense-cnn-poc
A classical or convolutional neural network model with adversarial defense protection
fiannac/NNAdversarialAttacks
Adversarial attacks on CNNs using the FGSM technique.
Cyb5r4Gene/Attack-Analysis-of-Face-Recognition-Authentication-Systems-Using-Fast-Gradient-Sign-Method-FGSM-
This study was conducted in collaboration with the University of Prishtina (Kosovo) and the University of Oslo (Norway). This implementation is part of the paper entitled "Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method", published in the International Journal of Applied Artificial Intelligence by Taylor & Francis.
ezrc2/adversarial-attack
Adversarial attacks on a deep neural network trained on ImageNet
GeorgeMLP/adversarial-attacks
Implementations of several white-box and black-box attacks.
JuoTungChen/Adversarial_attacks_DCNN
This repository contains implementations of three adversarial example attacks (FGSM, noise, and a semantic attack) and a defensive distillation approach to defend against the FGSM attack.
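The defensive distillation referenced above trains a student network on temperature-softened probabilities from a teacher network, which flattens the gradients FGSM relies on. A minimal sketch of the distillation loss (the temperature and names are illustrative assumptions, not the repository's values):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```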
pepealessio/Adversarial-Face-Identification
A university project for the AI4Cybersecurity class.
rojinakashefi/Trustworthy-ML
Learning adversarial robustness in machine learning, in both theory and practice.
ACM40960/project-21200461
Adversarial attacks on image data
arushi2509/Defense-Mechanisms-Against-Adversarial-Attacks-in-Computer-Vision-
Developed robust image classification models to mitigate the effect of adversarial attacks
davidggz/steganalysis-adversarial-attacks
Adversarial attacks on SRNet
EkagraGupta/ForschungsArbeit
This project evaluates the robustness of image classification models against adversarial attacks using two key metrics: Adversarial Distance and CLEVER. The study employs variants of the WideResNet model, including a standard model and a corruption-trained robust model, trained on the CIFAR-10 dataset. Key insights reveal that the CLEVER score serves as …
ericyoc/adversarial-defense-hnn-poc
A classical-quantum or hybrid neural network with adversarial defense protection
gautamHCSCV/Image-Anonymization-using-Adversarial-Attacks
The Fast Gradient Sign Method (FGSM) combines a white-box approach with a misclassification goal: it tricks a neural network into making wrong predictions. We use this technique to anonymize images.
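Since FGSM is the unifying technique across this topic, here is what the single-step attack described above typically looks like in PyTorch (a generic sketch, not code taken from any listed repository): the input is shifted by epsilon in the direction of the sign of the loss gradient.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: x_adv = clip(x + eps * sign(grad_x loss(model(x), y)))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
```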
giuliafazzi/adversarial-attacks
Notebook implementing different adversarial attack approaches using Python and PyTorch.
Inpyo-Hong/Knowledge-distillation-vulnerability-of-DeiT-through-CNN-adversarial-attack
"Neural Computing and Applications" Published Paper (2023)
Gaurav7888/Adversarial-Attacks-and-Defence
Adversarial-Attacks-and-Defence