Machine Learning Security

Academic Year 2023-2024

The course will start on October 3, 2023. Teams link.

Instructor: Prof. Battista Biggio

Teaching Assistants: Dr. Maura Pintor, Dr. Ambra Demontis

External Seminars: Dr. Luca Demetrio, Prof. Fabio Roli

MSc in Computer Engineering, Cybersecurity and Artificial Intelligence (Univ. Cagliari)

National PhD Program in Artificial Intelligence

PhD Program in Electronic and Computer Engineering (Univ. Cagliari)

GitHub repository for course material: https://github.com/unica-mlsec/mlsec

Lectures

  • Tuesday, 15-18, room I_IB (ex BA), building I
  • Thursday, 12-14, room I_IB (ex BA), building I

Course objectives and outcome

Objectives

The objective of this course is to provide students with the fundamental elements of machine learning security in the context of different application domains. The course presents the main concepts and methods of adversarial machine learning, from threat modeling to attacks and defenses, along with the basic methods to properly evaluate the adversarial robustness of a machine learning model against different attacks.

Outcome

An understanding of the fundamental concepts and methods of machine learning security and its applications. An ability to analyze and evaluate attacks and defenses in the context of application-specific domains. An ability to design and evaluate robust machine learning models with Python and test them on benchmark datasets.
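As a flavor of the hands-on part of the course, the sketch below shows how a robustness evaluation of this kind might look in Python, using the Fast Gradient Sign Method (FGSM), one standard evasion attack. It is only a minimal illustration under the assumption of a differentiable PyTorch classifier; the names `clf`, `x`, `y`, and `eps` are placeholders, and the actual course exercises may rely on different libraries, attacks, and benchmark datasets.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(clf, x, y, eps=8 / 255):
    """Craft adversarial examples with a single FGSM step.

    clf : a differentiable PyTorch classifier (placeholder nn.Module)
    x   : input batch, with values assumed to lie in [0, 1]
    y   : true labels for the batch
    eps : maximum L-infinity perturbation size
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(clf(x_adv), y)        # loss the attacker maximizes
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()  # one gradient-sign step
        x_adv = x_adv.clamp(0.0, 1.0)            # stay in the valid input range
    return x_adv.detach()

# Usage sketch (clf, x, y are assumed to be defined elsewhere):
# clf.eval()
# x_adv = fgsm_evasion(clf, x, y)
# robust_acc = (clf(x_adv).argmax(dim=1) == y).float().mean()
```

A single-step attack like this only gives a first, optimistic estimate of robustness; the course also covers stronger iterative and adaptive attacks (see the reading list below) that are needed for a reliable evaluation.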

Course materials

  1. Introduction to the course
  2. Threat modeling and attacks on AI/ML models
  3. Evasion Attacks
  4. Adversarial Windows Malware (Adversarial EXEmples) - Guest Lecture by Dr. Luca Demetrio
  5. From Known Knowns to Unknown Unknowns and Trustworthy AI - Guest Lecture by Prof. Fabio Roli
  6. Poisoning Attacks and Defenses
  7. Privacy Attacks and Defenses
  8. Explainable AI/ML

Papers for the reading group exercise

  1. C. Szegedy et al., Intriguing properties of neural networks, ICLR 2014.
  2. B. Biggio et al., Evasion Attacks against Machine Learning at Test Time, ECML PKDD 2013.
  3. A. Athalye et al., Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML 2018.
  4. F. Croce and M. Hein, Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, ICML 2020.
  5. F. Croce et al., Evaluating the Adversarial Robustness of Adaptive Test-time Defenses, ICML 2022.
  6. C. Yao et al., Automated Discovery of Adaptive Attacks on Adversarial Defenses, NeurIPS 2021.
  7. B. Biggio et al., Poisoning Attacks against Support Vector Machines, ICML 2012.
  8. A. Shafahi et al., Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks, NeurIPS 2018.
  9. T. Gu et al., BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, NIPS-WS 2017.
  10. R. Shokri et al., Membership Inference Attacks against Machine Learning Models, IEEE Symp. S&P 2017.
  11. F. Tramer et al., Stealing Machine Learning Models via Prediction APIs, USENIX Sec. 2016.