adversarial-machine-learning
There are 449 repositories under the adversarial-machine-learning topic.
Shawn-Shan/fawkes
Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
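The evasion attacks ART implements include gradient-based methods such as FGSM. As a minimal sketch of the idea (a toy NumPy model, not the ART API), the Fast Gradient Sign Method perturbs an input in the direction of the sign of the loss gradient:

```python
# Minimal FGSM sketch on a toy logistic-regression model (illustrative only,
# not the ART API): x_adv = x + eps * sign(dL/dx).
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method perturbation."""
    return x + eps * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: loss = -log sigmoid(y * w.x), with fixed weights w.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
y = 1.0

# Gradient of the loss w.r.t. the input x for this toy model.
grad_x = -y * (1.0 - sigmoid(y * np.dot(w, x))) * w

x_adv = fgsm(x, grad_x, eps=0.3)
# The perturbation reduces the model's confidence margin w.x on the true class.
```

In ART the same idea is wrapped behind attack classes that take a fitted classifier, so the gradient computation is handled by the library rather than written by hand.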
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
jiep/offensive-ai-compilation
A curated list of useful resources that cover Offensive AI.
protectai/llm-guard
The Security Toolkit for LLM Interactions
safe-graph/graph-adversarial-learning-literature
A curated list of adversarial attacks and defenses papers on graph-structured data.
RobustBench/robustbench
RobustBench: a standardized adversarial robustness benchmark [NeurIPS'21 Benchmarks and Datasets Track]
akanimax/T2F
T2F: text-to-face generation using deep learning
akanimax/pro_gan_pytorch
Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"
thu-ml/ares
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
EdisonLeeeee/GraphGallery
GraphGallery is a gallery for benchmarking Graph Neural Networks, from InplusLab.
locuslab/smoothing
Provable adversarial robustness at ImageNet scale
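Randomized smoothing, the technique behind this repository, certifies robustness by classifying under Gaussian input noise. A hedged sketch of its prediction rule (a toy base classifier, not the repo's code): the smoothed classifier returns the class the base classifier predicts most often over noisy copies of the input.

```python
# Sketch of randomized smoothing's prediction rule (illustrative only):
# majority vote of a base classifier over Gaussian perturbations of x.
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples, rng):
    """Return the most frequent label of base_classifier(x + noise)."""
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

# Toy base classifier: class 1 iff the first coordinate is positive.
clf = lambda z: int(z[0] > 0)
rng = np.random.default_rng(0)
x = np.array([2.0, -1.0])  # well inside class 1
label = smoothed_predict(clf, x, sigma=0.5, n_samples=200, rng=rng)
```

The certified radius in the paper is then derived from how confidently the vote favors the top class; the repository computes that bound with statistical confidence intervals.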
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
ebagdasa/backdoors101
Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool for conducting research on backdoors.
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers, updated daily.
hbaniecki/adversarial-explainable-ai
💡 Adversarial attacks on explanations and how to defend them
Verified-Intelligence/auto_LiRPA
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
chawins/llm-sp
Papers and resources related to the security and privacy of LLMs 🤖
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
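Detectors in this space range from simple pattern matching to embedding and transformer-based scanners. As an illustrative heuristic only (not Vigil's implementation, which combines multiple scanners), a pattern-based check might look like:

```python
# Illustrative prompt-injection heuristic (NOT Vigil's implementation):
# flag inputs containing common injection phrases. Real detectors layer
# several signals (embeddings, rules, classifiers) on top of this idea.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now (in )?(dan|developer) mode",
    r"disregard the system prompt",
]

def looks_like_injection(text: str) -> bool:
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A phrase list alone is easy to evade with paraphrasing, which is exactly why tools like Vigil stack multiple detection methods.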
Hadisalman/smoothing-adversarial
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
tao-bai/attack-and-defense-methods
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
pralab/secml_malware
Create adversarial attacks against machine learning Windows malware detectors
ashafahi/free_adv_train
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
milaan9/Deep_Learning_Algorithms_from_Scratch
This repository explores techniques and algorithms commonly used in deep learning, with implementations in MATLAB and Python.
AvalZ/WAF-A-MoLE
A guided mutation-based fuzzer for ML-based Web Application Firewalls
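Guided mutation-based fuzzing of this kind repeatedly applies semantics-preserving rewrites to a payload and keeps the variants the target classifier scores as least suspicious. A hedged sketch in that spirit (toy mutation operators and a toy "WAF", not WAF-A-MoLE's actual ones):

```python
# Sketch of guided mutation fuzzing in the spirit of WAF-A-MoLE (illustrative
# only): mutate an SQL injection payload and keep variants a toy keyword-based
# "WAF" scores lower, i.e. is less likely to block.
import random

def mutate(payload: str, rng: random.Random) -> str:
    ops = [
        lambda s: s.replace(" ", "/**/", 1),   # comment-as-whitespace
        lambda s: s.replace("OR", "||", 1),    # boolean-operator synonym
        lambda s: s.swapcase(),                # case flipping
    ]
    return rng.choice(ops)(payload)

def toy_waf_score(payload: str) -> int:
    """Pretend keyword-matching WAF: higher score = more likely blocked."""
    return sum(kw in payload for kw in ("OR", "SELECT", "UNION", " "))

def fuzz(seed: str, rounds: int, rng: random.Random) -> str:
    best = seed
    for _ in range(rounds):
        candidate = mutate(best, rng)
        if toy_waf_score(candidate) <= toy_waf_score(best):
            best = candidate  # keep variants the toy WAF likes less
    return best

seed = "' OR 1=1 --"
evasive = fuzz(seed, rounds=50, rng=random.Random(7))
```

WAF-A-MoLE's real mutation operators preserve SQL semantics while changing the syntax the ML-based firewall sees, which is the same search loop with a genuine detector in place of the toy score.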
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, AI prompt engineering, and adversarial machine learning.
sisinflab/adversarial-recommender-systems-survey
The goal of this survey is two-fold: (i) to present recent advances in adversarial machine learning (AML) for the security of recommender systems (RS), i.e., attacking and defending recommendation models; and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability to learn (high-dimensional) data distributions. The survey provides an exhaustive review of 74 articles published in major RS and ML journals and conferences, and serves as a reference for the RS community working on the security of RS or on GAN-based generative models.
shangtse/robust-physical-attack
Physical adversarial attack for fooling the Faster R-CNN object detector
akanimax/msg-gan-v1
MSG-GAN: Multi-Scale Gradients GAN (architecture inspired by ProGAN, but without layer-wise growing)
Trustworthy-AI-Group/TransferAttack
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
pralab/secml
A Python library for Secure and Explainable Machine Learning
spring-epfl/mia
A library for running membership inference attacks against ML models
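A classic membership inference baseline is the loss-threshold attack: since models fit their training data better, a point with unusually low loss is predicted to be a training-set member. A sketch of that baseline on synthetic losses (illustrative only, not the mia library's API):

```python
# Loss-threshold membership inference sketch (illustrative baseline, not the
# mia library's API): predict "member" when the model's loss on a point is
# below a threshold.
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Return 1 ("member") where loss < threshold, else 0 ("non-member")."""
    return (np.asarray(losses) < threshold).astype(int)

# Synthetic losses: members (seen during training) tend to have lower loss.
member_losses = np.array([0.05, 0.10, 0.02, 0.20])
nonmember_losses = np.array([0.90, 1.40, 0.70, 1.10])

preds_members = loss_threshold_attack(member_losses, threshold=0.5)
preds_nonmembers = loss_threshold_attack(nonmember_losses, threshold=0.5)

# Attack accuracy on this toy data (members labeled 1, non-members 0).
acc = (preds_members.sum() + (1 - preds_nonmembers).sum()) / 8.0
```

Libraries like mia generalize this with shadow models that learn the attack decision rule instead of a hand-picked threshold.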
EdisonLeeeee/RS-Adversarial-Learning
A curated collection of adversarial attack and defense on recommender systems.
brysef/rfml
Radio Frequency Machine Learning with PyTorch
ZhengyuZhao/AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy