adversarial-ml-demo

This project investigates the vulnerability of a deep learning-based Traffic Sign Classifier to the Fast Gradient Sign Method (FGSM) adversarial attack, demonstrating how susceptible such models are to adversarial examples and underscoring the need for adversarial robustness.
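As a rough illustration of the attack described above (not the project's own code), FGSM perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε · sign(∇ₓ L(f(x), y)). The sketch below is a minimal PyTorch-style version; the `model` argument, the `epsilon` value, and the assumption that pixels lie in [0, 1] are illustrative choices, not taken from this repository.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Minimal FGSM sketch: x_adv = x + epsilon * sign(dL/dx).

    Assumes `model` maps image tensors to class logits and that
    pixel values are in [0, 1]; both are illustrative assumptions.
    """
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    perturbed = images + epsilon * images.grad.sign()
    # Clip back to the valid pixel range.
    return perturbed.clamp(0.0, 1.0).detach()
```

With a small ε the perturbation is barely visible to a human, yet it is often enough to flip the classifier's prediction for a traffic sign image.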

Primary Language: PureBasic · License: MIT
