Pinned Repositories
cifar10_challenge
A challenge to explore adversarial robustness of neural networks on CIFAR10.
convex_adversarial
A method for training neural networks that are provably robust to adversarial attacks.
dtsip.github.io
Guided-Denoise
Team TSAIL's winning submission to the NIPS 2017 competition "Defense Against Adversarial Attack"
in-context-learning
pixel-deflection
Deflecting Adversarial Attacks with Pixel Deflection
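The cifar10_challenge pinned above (and the mnist_challenge listed below) evaluate defenses against ℓ∞-bounded attacks such as projected gradient descent (PGD). As a rough illustration only, here is a minimal PyTorch sketch of a PGD-style attack on a generic classifier `model`; the function name and the eps/alpha/steps defaults are assumptions for illustration, not the challenges' TensorFlow reference code.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        """Craft L-infinity-bounded adversarial examples with projected gradient descent."""
        # Random start inside the eps-ball, clipped to the valid pixel range.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()       # ascend the loss
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
                x_adv = x_adv.clamp(0, 1)                 # keep pixels in [0, 1]
        return x_adv.detach()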
mnist_challenge
A challenge to explore adversarial robustness of neural networks on MNIST.
robust_representations
Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness"
helm
Holistic Evaluation of Language Models (HELM) is an open-source Python framework created by the Center for Research on Foundation Models (CRFM) at Stanford for the holistic, reproducible, and transparent evaluation of foundation models, including large language models (LLMs) and multimodal models.
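The robust_representations entry above accompanies work arguing that the features of adversarially robust models can be inverted into recognizable images by gradient descent on the input. The sketch below shows that inversion step only; `feature_extractor` (assumed to return penultimate-layer activations), the loss, and the optimizer settings are illustrative assumptions, not the repository's code.

    import torch

    def invert_features(feature_extractor, target_features, x_init, steps=200, lr=0.1):
        """Optimize an image so its features match target_features (feature inversion)."""
        x = x_init.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = (feature_extractor(x) - target_features).pow(2).mean()
            loss.backward()
            opt.step()
            with torch.no_grad():
                x.clamp_(0, 1)  # keep pixels in a valid range
        return x.detach()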
dtsip's Repositories
dtsip/in-context-learning
dtsip/convex_adversarial
A method for training neural networks that are provably robust to adversarial attacks.
dtsip/Guided-Denoise
Team TSAIL's winning submission to the NIPS 2017 competition "Defense Against Adversarial Attack"
dtsip/cifar10_challenge
A challenge to explore adversarial robustness of neural networks on CIFAR10.
dtsip/dtsip.github.io
dtsip/pixel-deflection
Deflecting Adversarial Attacks with Pixel Deflection
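For the pixel-deflection defense listed above, the core operation replaces randomly chosen pixels with values drawn from a small local neighborhood (the paper follows this with wavelet denoising, omitted here). A minimal NumPy sketch under those assumptions, not the authors' implementation:

    import numpy as np

    def pixel_deflection(img, deflections=200, window=10, rng=None):
        """Replace randomly chosen pixels with values copied from random nearby pixels."""
        rng = np.random.default_rng() if rng is None else rng
        out = img.copy()
        h, w = img.shape[:2]
        for _ in range(deflections):
            r, c = rng.integers(0, h), rng.integers(0, w)                   # pixel to deflect
            rr = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
            cc = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
            out[r, c] = img[rr, cc]                                         # copy a neighbour's value
        return out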