Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Python · MIT License
Issues
- Implement semantic adversarial attacks (#2126, 2 comments)
- Bugs in AutoAttack implementation (#2107, 1 comment)
- Bug: art.exceptions.EstimatorError (#2105, 0 comments)
- Loss Weighting Overridden in IBP Training (#2102, 0 comments)
- Incorrect image format for default test subsets (#2101, 1 comment)
- ZooAttack issues (#2094, 0 comments)
- Incorrect YOLO Bounding Box Input Format (#2088, 1 comment)
- Questions for robust decision tree (#2087, 3 comments)
- Error in pytorch_yolo.py (#2086, 1 comment)
- Audio perturbation code should cache the trigger (#2052, 1 comment)
- Adversarial attack on decision trees? (#2048, 1 comment)
- Bug in pytorch_deep_speech (#2043, 10 comments)
- Apply MIFace attack on different datasets (#2042, 0 comments)
- Implementation of certified training via IBP (#2037, 0 comments)
- TRADES adversarial training implementation (#2031, 1 comment)
- adversarial_training_FBF.py example error (#2014, 1 comment)
- Audio perturbations go out of range (#2002, 0 comments)
- Missing Object Detection Estimator Types (#1998, 1 comment)
- YOLO Object Detection Estimator for TensorFlow (#1996, 0 comments)
- Data Augmentation Defenses `apply_fit` and `apply_predict` Default Parameters are Swapped (#1986, 0 comments)
- DP-InstaHide DoubleTensor Type Error for PyTorch (#1985, 0 comments)