adversarial-machine-learning
There are 532 repositories under the adversarial-machine-learning topic.
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
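To make "evasion" concrete, here is a minimal sketch of a one-step Fast Gradient Sign Method (FGSM) attack, the simplest of the evasion attacks ART implements. The hand-rolled logistic-regression model and all names below are illustrative assumptions, not ART's API.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    # logistic-loss gradient w.r.t. the input x, for label y in {0, 1}
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# toy linear model: class 1 iff w.x + b > 0
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])                 # clean input, scored 1.5 -> class 1
x_adv = fgsm(x, w, b, y=1.0, eps=0.6)     # adversarial copy, scored -0.3 -> class 0
```

The attack maximizes the loss subject to an L-infinity budget `eps`; ART's `FastGradientMethod` generalizes this to arbitrary differentiable estimators.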
Shawn-Shan/fawkes
Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
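The word-substitution idea behind many TextAttack recipes can be sketched in a few lines; the hard-coded synonym table below is an illustrative assumption (real recipes use embedding-based synonym models and a goal function on the victim classifier).

```python
# Toy word-substitution perturbation of the kind TextAttack automates.
SYNONYMS = {"good": "decent", "movie": "film", "great": "fine"}

def perturb(sentence: str) -> str:
    """Swap words for near-synonyms to probe a text classifier's stability."""
    return " ".join(SYNONYMS.get(w, w) for w in sentence.lower().split())
```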
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT jailbreaks, GPT Assistants prompt leaks, GPTs prompt injection, LLM prompt security, super prompts, prompt hacking, AI prompt engineering, and adversarial machine learning.
protectai/llm-guard
The Security Toolkit for LLM Interactions
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
jiep/offensive-ai-compilation
A curated list of useful resources that cover Offensive AI.
safe-graph/graph-adversarial-learning-literature
A curated list of adversarial attacks and defenses papers on graph-structured data.
RobustBench/robustbench
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
akanimax/T2F
T2F: text to face generation using Deep Learning
chawins/llm-sp
Papers and resources related to the security and privacy of LLMs 🤖
akanimax/pro_gan_pytorch
Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"
thu-ml/ares
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
EdisonLeeeee/GraphGallery
GraphGallery is a gallery for benchmarking Graph Neural Networks
deadbits/vigil-llm
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
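A toy pattern-matching heuristic of the kind a scanner like Vigil layers alongside embedding-based and other detectors; the phrase list and function name are illustrative assumptions, not Vigil's implementation.

```python
import re

# Known jailbreak/injection phrasings (illustrative, far from exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) (system|previous) prompt",
    r"you are now (dan|in developer mode)",
]

def looks_like_injection(text: str) -> bool:
    """Flag inputs matching known prompt-injection phrasings."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Pattern lists catch only known phrasings; production scanners combine them with semantic similarity against a corpus of attacks.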
Koukyosyumei/AIJack
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
Trustworthy-AI-Group/TransferAttack
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
locuslab/smoothing
Provable adversarial robustness at ImageNet scale
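The core of randomized smoothing (Cohen et al.) fits in a few lines: classify many Gaussian-noised copies of the input and return the majority vote, whose margin then yields a certified radius. The base classifier and parameters below are illustrative assumptions, not the repository's code.

```python
import numpy as np

def smoothed_predict(base_clf, x, sigma, n, rng):
    """Majority vote of base_clf over n Gaussian perturbations of x."""
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    votes = np.array([base_clf(x + d) for d in noise])
    return int(np.argmax(np.bincount(votes)))

# toy base classifier: threshold on the first coordinate
base_clf = lambda z: int(z[0] > 0.0)
rng = np.random.default_rng(0)
x = np.array([0.8, 0.0])
pred = smoothed_predict(base_clf, x, sigma=0.25, n=1000, rng=rng)
```

The repository additionally converts the vote counts into a high-probability certified L2 radius; this sketch shows only the prediction step.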
MinghuiChen43/awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers. Updated daily.
ebagdasa/backdoors101
Backdoors framework for deep learning and federated learning. A lightweight tool for conducting research on backdoors.
Verified-Intelligence/auto_LiRPA
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
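For intuition, here is interval bound propagation (IBP), the simplest member of the bound-propagation family that auto_LiRPA supports; the toy weights are illustrative assumptions, and auto_LiRPA's linear relaxations (CROWN and variants) compute much tighter bounds than this.

```python
import numpy as np

def ibp_linear(lb, ub, W, b):
    """Propagate an input box [lb, ub] through x -> Wx + b."""
    center, radius = (lb + ub) / 2.0, (ub - lb) / 2.0
    mid = W @ center + b
    rad = np.abs(W) @ radius     # worst-case spread of the box
    return mid - rad, mid + rad

def ibp_relu(lb, ub):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# two-layer ReLU net with fixed toy weights, input L-inf ball of radius 0.1
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lb, ub = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lb, ub = ibp_relu(*ibp_linear(lb, ub, W1, b1))
lb, ub = ibp_linear(lb, ub, W2, b2)   # certified output bounds [0.0, 0.45]
```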
hbaniecki/adversarial-explainable-ai
💡 Adversarial attacks on explanations and how to defend them
pralab/secml_malware
Create adversarial attacks against machine learning Windows malware detectors
Trustworthy-AI-Group/Adversarial_Examples_Papers
A list of recent papers about adversarial learning
Hadisalman/smoothing-adversarial
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
tao-bai/attack-and-defense-methods
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
AvalZ/WAF-A-MoLE
A guided mutation-based fuzzer for ML-based Web Application Firewalls
pralab/secml
A Python library for Secure and Explainable Machine Learning
alexdevassy/Machine_Learning_CTF_Challenges
CTF challenges designed and implemented in machine learning applications
ashafahi/free_adv_train
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
milaan9/Deep_Learning_Algorithms_from_Scratch
This repository explores the variety of techniques and algorithms commonly used in deep learning, with implementations in MATLAB and Python.
brysef/rfml
Radio Frequency Machine Learning with PyTorch
shangtse/robust-physical-attack
Physical adversarial attack for fooling the Faster R-CNN object detector
ZhengyuZhao/AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
sisinflab/adversarial-recommender-systems-survey
This survey has two goals: (i) to present recent advances in adversarial machine learning (AML) for the security of recommender systems (RS), i.e., attacking and defending recommendation models, and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability to learn (high-dimensional) data distributions. It provides an exhaustive review of 74 articles published in major RS and ML journals and conferences, and serves as a reference for the RS community working on RS security or on GAN-based generative models.
akanimax/msg-gan-v1
MSG-GAN: Multi-Scale Gradients GAN (Architecture inspired from ProGAN but doesn't use layer-wise growing)