
Adversarial-Label-Flip-Attack

Implementation of the research paper: Xiao, Han, Huang Xiao, and Claudia Eckert. "Adversarial Label Flips Attack on Support Vector Machines." ECAI 2012.

  • Addresses the label-flip attack problem, in which an adversary contaminates the training set by flipping the labels of selected training examples
  • Formulates an optimization framework for finding the label flips that maximize the classification error of a Support Vector Machine
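The idea behind the attack can be illustrated with a simplified greedy heuristic (not the paper's exact optimization framework): train a clean SVM, then flip the labels of the training points the classifier is most confident about, so that retraining on the poisoned labels distorts the decision boundary. The sketch below uses scikit-learn and a synthetic dataset; the `flip_labels` helper and the 20% flip budget are illustrative choices, not part of the original code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def flip_labels(clf, X_train, y_train, budget):
    """Flip the `budget` training labels with the largest signed margin
    y_i * f(x_i), i.e. the most confidently correct points. This is a
    simple heuristic stand-in for the paper's optimization, kept here
    only to illustrate the label-flip attack setting."""
    margins = y_train * clf.decision_function(X_train)
    idx = np.argsort(margins)[-budget:]  # indices of highest-margin points
    y_flipped = y_train.copy()
    y_flipped[idx] = -y_flipped[idx]
    return y_flipped

# Synthetic binary classification data with labels in {-1, +1}
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
y = 2 * y - 1
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Train on clean labels, poison 20% of the training labels, retrain
clean = LinearSVC(C=1.0, max_iter=10000).fit(X_tr, y_tr)
y_poisoned = flip_labels(clean, X_tr, y_tr, budget=40)
poisoned = LinearSVC(C=1.0, max_iter=10000).fit(X_tr, y_poisoned)

acc_clean = clean.score(X_te, y_te)
acc_poisoned = poisoned.score(X_te, y_te)
print(f"clean accuracy={acc_clean:.3f}, poisoned accuracy={acc_poisoned:.3f}")
```

The paper's ALFA formulation instead searches for the flip set within a fixed budget that maximizes the retrained SVM's empirical risk, which is what the optimization framework above refers to.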