NDSS21-Model-Poisoning

Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning"


A General Framework to Evaluate Robustness of Aggregation Algorithms in Federated Learning

by Virat Shejwalkar and Amir Houmansadr, published at the ISOC Network and Distributed System Security Symposium (NDSS) 2021

Motivation

Result Highlights

Understanding the code and using the notebooks

The code is provided as Jupyter notebooks that are self-explanatory: the purpose of each cell is described in the respective notebook. To run the code, clone or download the repo and run the notebooks in the usual manner. The evaluation dimensions are as follows:

  • Datasets included are CIFAR10 (covering the iid and cross-silo FL cases) and FEMNIST (covering the non-iid and cross-device FL cases).
  • We provide code for five state-of-the-art aggregation algorithms with theoretical convergence guarantees: Krum, Multi-krum, Bulyan, Trimmed-mean, and Median (a minimal sketch of two of these rules follows this list).
  • Baseline model poisoning attacks: Fang and LIE.
  • Our state-of-the-art model poisoning attacks, i.e., the Aggregation-tailored and Aggregation-agnostic attacks, for the above-mentioned aggregation algorithms. For any other aggregation algorithm, the code allows for a simple plug-and-attack framework.
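The robust aggregation rules listed above operate on the set of (flattened) client updates collected in each FL round. As an illustration only, and not the authors' code, below is a minimal PyTorch sketch of two of them, coordinate-wise Trimmed-mean and Median; the function names and the `beta` trimming parameter are assumptions made for this example.

```python
import torch

def trimmed_mean(updates: torch.Tensor, beta: int) -> torch.Tensor:
    """Coordinate-wise trimmed mean over client updates (illustrative sketch).

    updates: (n_clients, dim) tensor of flattened model updates.
    beta:    number of largest and smallest values removed per coordinate.
    """
    sorted_updates, _ = torch.sort(updates, dim=0)
    # Drop the beta smallest and beta largest values in each coordinate,
    # then average the remaining n_clients - 2 * beta values.
    return sorted_updates[beta: updates.shape[0] - beta].mean(dim=0)

def median(updates: torch.Tensor) -> torch.Tensor:
    """Coordinate-wise median over client updates (illustrative sketch)."""
    return updates.median(dim=0).values

# Toy usage: 10 clients, 5-dimensional updates, trim 2 values from each tail.
if __name__ == "__main__":
    fake_updates = torch.randn(10, 5)
    print(trimmed_mean(fake_updates, beta=2))
    print(median(fake_updates))
```

Both rules discard extreme per-coordinate values contributed by (potentially malicious) clients, which is what the model poisoning attacks in this repo are optimized to circumvent.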

Requirements