Code for the project "Making Neural Nets Robust Again", implemented as Jupyter notebooks.

We propose and analyze two defenses against adversarial attacks (minimal sketches of each follow the list):
- Latent code classification of a Variational Autoencoder
- Low-rank matrix factorization of the weights of the classifier
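
The first defense classifies inputs from the latent code produced by a VAE encoder rather than from raw pixels. Below is a minimal PyTorch sketch of that idea; the framework choice, layer sizes (784 → 400 → 20), and class names are illustrative assumptions, not the repo's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Standard fully connected VAE (sizes are placeholder assumptions)."""
    def __init__(self, in_dim=784, hidden=400, latent=20):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, in_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

class LatentClassifier(nn.Module):
    """Classifier head that sees only the VAE latent code, never raw pixels."""
    def __init__(self, vae, latent=20, n_classes=10):
        super().__init__()
        self.vae = vae
        self.head = nn.Linear(latent, n_classes)

    def forward(self, x):
        # Use the posterior mean as a deterministic latent code at test time.
        mu, _ = self.vae.encode(x.view(x.size(0), -1))
        return self.head(mu)

# Usage sketch: pretrain the VAE with the usual reconstruction + KL objective,
# then train the head with cross-entropy on (latent mean, label) pairs.
vae = VAE()
clf = LatentClassifier(vae)
logits = clf(torch.rand(8, 1, 28, 28))   # -> shape (8, 10)
```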
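
The second defense replaces a trained weight matrix W (out × in) with a rank-r factorization. One way to realize this, sketched below under the assumption of PyTorch linear layers, is truncated SVD: W ≈ U_r S_r V_rᵀ, implemented as two smaller linear layers. The rank and layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

def low_rank_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate `layer` by two linear maps of total rank `rank`."""
    W = layer.weight.data                      # shape (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]               # (out, r), singular values folded in
    V_r = Vh[:rank, :]                         # (r, in)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = V_r
    second.weight.data = U_r
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

# Usage: truncate a 512x512 layer to rank 32; the factorized layer approximates
# the original mapping while discarding the smallest singular directions.
dense = nn.Linear(512, 512)
compact = low_rank_linear(dense, rank=32)
x = torch.randn(4, 512)
rel_err = (dense(x) - compact(x)).norm() / dense(x).norm()
```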