This codebase is an academic research prototype meant to elucidate protocol details and to support proofs of concept and benchmarking. It is not intended for production deployment.
This repository contains the evaluation code for the following manuscripts:
- Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees. Banghua Zhu*, Lun Wang*, Qi Pang*, Shuai Wang, Jiantao Jiao, Dawn Song, Michael Jordan. arXiv preprint arXiv:2205.11765, 2022.
- Towards Bidirectional Protection in Federated Learning. Lun Wang*, Qi Pang*, Shuai Wang, Dawn Song. SpicyFL Workshop @ NeurIPS 2020.
We implemented a number of attacks on federated learning, including the DBA backdoor attack and the `trimmedmean` attack referenced below, together with the following Byzantine-robust aggregators (a minimal sketch of two of these aggregators follows the list):
- Bucketing-filtering
- Bucketing-no-regret
- Bulyan Krum
- Bulyan Median
- Bulyan Trimmed Mean
- Filtering
- GAN
- Krum
- Median
- No-regret
- Trimmed Mean
- Bucketing
- Learning from History
- Clustering
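For intuition, here is a minimal NumPy sketch of two of the simplest aggregators above, coordinate-wise Median and Trimmed Mean. It is an illustration under simplifying assumptions (client updates flattened into vectors), not the repository's implementation.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of the client updates (one row per client)."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, beta):
    """Coordinate-wise trimmed mean: in each coordinate, drop the beta
    smallest and beta largest values, then average the remainder."""
    stacked = np.sort(np.stack(updates), axis=0)  # sort each coordinate
    return stacked[beta:len(updates) - beta].mean(axis=0)

# Toy example: 4 honest clients plus 1 Byzantine client sending a huge update.
updates = [np.ones(3)] * 4 + [np.full(3, 100.0)]
print(coordinate_median(updates))     # -> [1. 1. 1.]
print(trimmed_mean(updates, beta=1))  # -> [1. 1. 1.]
```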
The evaluation environment uses the following software versions:
- conda 4.12.0
- Python 3.7.11
- GNU Screen 4.06.02 (23-Oct-17)
First, create a conda virtual environment with Python 3.7.11 and activate it:
conda create -n venv python=3.7.11
conda activate venv
Run the following command to install all required Python packages:
pip install -r requirements.txt
Reproduce the evaluation results by running the script below. You may need to adjust the GPU indices in the script manually; as written, it distributes training tasks across 8 NVIDIA GPUs indexed 0-7.
./train.sh
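For reference, here is a hypothetical Python sketch of what a script like train.sh can do: enumerate (dataset, attack, aggregator) combinations and pin each run to one GPU via CUDA_VISIBLE_DEVICES. The combination lists below are placeholders, not the script's actual grid.

```python
# Hypothetical sketch: fan simulation runs out across 8 GPUs by pinning
# each subprocess to a device. All combination lists are placeholders.
import itertools
import os
import subprocess

datasets = ["MNIST"]                                   # placeholder values
attacks = ["noattack", "trimmedmean"]                  # placeholder values
aggs = ["median", "krum", "trimmedmean", "filtering"]  # placeholder values

procs = []
for i, (dataset, attack, agg) in enumerate(
        itertools.product(datasets, attacks, aggs)):
    # Round-robin over GPUs 0-7; with more jobs than GPUs this naive
    # version would oversubscribe devices, so batch accordingly.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(i % 8))
    procs.append(subprocess.Popen(
        ["python", "src/simulate.py", f"--dataset={dataset}",
         f"--attack={attack}", f"--agg={agg}"],
        env=env,
    ))
for p in procs:
    p.wait()  # block until every run finishes
```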
To run a single Byzantine-robust aggregator against a single attack on a given dataset, run the following command with the appropriate arguments:
python src/simulate.py --dataset='dataset' --attack='attack' --agg='aggregator'
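For example, to evaluate the Median aggregator on MNIST with no attack (assuming the lowercase argument spellings used elsewhere in this README):

python src/simulate.py --dataset='MNIST' --attack='noattack' --agg='median'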
For the DBA attack, we reuse its official implementation. First, open a terminal and run the following command to start the Visdom monitor:
python -m visdom.server -p 8097
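The Visdom dashboard is then reachable in a browser at http://localhost:8097.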
Then start the training with the selected aggregator and attack, which are specified in `utils/X.yaml`, where `X` can be `mnist_params` or `fashion_params`:
cd ./src/DBA
python main.py --params utils/X.yaml
For the GAN aggregator, run the following commands to start training in round `X`:
python src/simulate_gan.py --current_round=X --attack='noattack' --dataset='MNIST'
python src/gan.py --next_round=X+1 --gan_lr=1e-5
Note that `X` starts from 0, and you may want to try different hyper-parameters (e.g., the learning rate) in `gan.py` if you use datasets other than `MNIST` or attacks other than `trimmedmean` and `noattack`.
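Putting the two steps together, a hypothetical driver loop for the GAN aggregator could look as follows; `NUM_ROUNDS` is a placeholder, and the real round count depends on your experiment.

```python
# Hypothetical driver for the GAN-aggregator loop described above:
# simulate round X, then train the GAN to prepare round X + 1.
import subprocess

NUM_ROUNDS = 10  # placeholder; pick the number of rounds you need
for x in range(NUM_ROUNDS):  # X starts from 0
    subprocess.run(
        ["python", "src/simulate_gan.py", f"--current_round={x}",
         "--attack=noattack", "--dataset=MNIST"],
        check=True,
    )
    subprocess.run(
        ["python", "src/gan.py", f"--next_round={x + 1}", "--gan_lr=1e-5"],
        check=True,
    )
```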
If you find our work useful in your research, please consider citing:
@inproceedings{zhu2022byzantine,
  title={Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees},
  author={Banghua Zhu and Lun Wang and Qi Pang and Shuai Wang and Jiantao Jiao and Dawn Song and Michael Jordan},
  year={2022},
  url={https://arxiv.org/abs/2205.11765}
}
@article{wang2020f,
  title={F2ED-LEARNING: Good fences make good neighbors},
  author={Lun Wang and Qi Pang and Shuai Wang and Dawn Song},
  journal={CoRR},
  year={2020},
  url={http://128.1.38.43/wp-content/uploads/2020/12/Lun-Wang-07-paper-Lun.pdf}
}
The evaluation code for the DBA attack largely reuses the original implementation from the authors of DBA.