Pinned Repositories
adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
backgrounds_challenge
foolbox
Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, Keras, …
hold-me-tight
Source code of "Hold me tight! Influence of discriminative features on deep network boundaries"
imagenet-r
ImageNet-R(endition) and DeepAugment (ICCV 2021)
neural-anisotropy-directions
Source code for "Neural Anisotropy Directions"
PRIME-augmentations
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions
pytorch-cifar
95.47% on CIFAR10 with PyTorch
SparseFool
amodas's Repositories