GradAttack is a Python library for easy evaluation of privacy risks posed by public gradients in Federated Learning, as well as of the corresponding mitigation strategies. The current version focuses on the gradient inversion attack in the image classification task, which recovers private images from public gradients.
Recent research shows that sending gradients instead of data in Federated Learning can leak private information (see this growing list of attack papers). These attacks demonstrate that an adversary eavesdropping on a client's communications (i.e. observing the global model weights and the client's update) can accurately reconstruct the client's private data using a class of techniques known as "gradient inversion attacks", which raise serious concerns about such privacy leakage.
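To see why a gradient can leak data at all: for a softmax-classification loss, the gradient with respect to a fully connected layer's weight matrix is the outer product of the error signal and the layer's input, so every nonzero row is a scalar multiple of the private input. Here is a self-contained NumPy sketch of this well-known analytic leakage (illustrative only; none of these names are GradAttack code):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=8)        # private input features of a linear layer
W = rng.normal(size=(3, 8))   # layer weights; logits = W @ h
y = 1                         # private label

# Forward pass: softmax cross-entropy on a single example.
logits = W @ h
p = np.exp(logits - logits.max())
p /= p.sum()
delta = p.copy()
delta[y] -= 1.0               # error signal dL/dlogits

grad_W = np.outer(delta, h)   # dL/dW -- what an eavesdropper observes

# Each row of grad_W equals delta[i] * h, so the private input is
# recovered up to scale from any nonzero row (we divide by delta[0]
# here only to make the match exact).
h_recovered = grad_W[0] / delta[0]

# The label leaks too: delta[y] = p[y] - 1 is the only negative entry.
y_recovered = int(np.argmin(delta))
```

Realistic attacks on deep networks replace this closed form with an iterative gradient-matching optimization, which is the setting GradAttack targets.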
To counter these attacks, researchers have proposed defense mechanisms (see this growing list of defense papers). We are developing this framework to evaluate different defense mechanisms against state-of-the-art attacks.
There are lots of reasons to use GradAttack:

- 😈 Evaluate the privacy risk of your Federated Learning pipeline by running the attacks supported by GradAttack against it
- 💊 Enhance the privacy of your Federated Learning pipeline by applying the defenses supported by GradAttack in a plug-and-play fashion
- 🔧 Research and develop new gradient attacks and defenses by reusing the simple and extensible APIs in GradAttack
For help and real-time updates related to GradAttack, please join the GradAttack Slack!
You may install GradAttack directly from PyPI using `pip`:

```bash
pip install gradattack
```
You can also install directly from the source for the latest features:

```bash
git clone https://github.com/Princeton-SysML/GradAttack
cd GradAttack
pip install -e .
```
To evaluate your model's privacy leakage against the gradient inversion attack, all you need to do is:
- Define your deep learning pipeline

```python
datamodule = CIFAR10DataModule()
model = create_lightning_module(
    'ResNet18',
    training_loss_metric=loss,
    **hparams,
)
trainer = pl.Trainer(
    gpus=devices,
    check_val_every_n_epoch=1,
    logger=logger,
    max_epochs=args.n_epoch,
    callbacks=[early_stop_callback],
)
pipeline = TrainingPipeline(model, datamodule, trainer)
```
- (Optional) Apply defenses to the pipeline

```python
defense_pack = DefensePack(args, logger)
defense_pack.apply_defense(pipeline)
```
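As a standalone illustration of the gradient-perturbation family of defenses that frameworks like this evaluate, here is a minimal sketch of the clip-and-add-noise recipe (DP-SGD style); the function name and parameters are ours, not part of GradAttack's API:

```python
import numpy as np

def defend_update(grad, clip_norm=1.0, noise_std=0.01, rng=None):
    """Clip a client update to a maximum L2 norm, then add Gaussian
    noise. Illustrative sketch only -- in GradAttack, defenses are
    applied through DefensePack instead."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=grad.shape)

raw = np.array([3.0, 4.0])                 # L2 norm 5.0
noisy = defend_update(raw, clip_norm=1.0)  # norm clipped to 1, then perturbed
```

Defenses of this kind trade reconstruction quality for model accuracy; measuring that utility-privacy trade-off is exactly what this framework is designed for.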
- Run training with the pipeline (see detailed example scripts and bash scripts in examples)

```python
pipeline.run()
pipeline.test()
```

You may use the TensorBoard logs to track your training and to compare results of different runs:

```bash
tensorboard --logdir PATH_TO_TRAIN_LOGS
```
- Run the attack on the pipeline (see detailed example scripts and bash scripts in examples)

```python
# Fetch a victim batch and define an attack instance
example_batch = pipeline.get_datamodule_batch()
batch_gradients, step_results = pipeline.model.get_batch_gradients(
    example_batch, 0)
batch_inputs_transform, batch_targets_transform = step_results[
    "transformed_batch"]
attack_instance = GradientReconstructor(
    pipeline,
    ground_truth_inputs=batch_inputs_transform,
    ground_truth_gradients=batch_gradients,
    ground_truth_labels=batch_targets_transform,
)

# Launch the attack
attack_trainer = pl.Trainer(
    max_epochs=10000,
)
attack_trainer.fit(attack_instance)
```
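Under the hood, attacks in this family optimize dummy data so that its gradients match the observed ones; in notation of our own (see the paper for the exact objective used by GradientReconstructor), the reconstruction solves roughly

$$\min_{x'}\;\bigl\|\nabla_{\theta}\,\mathcal{L}\bigl(f_{\theta}(x'),\,y'\bigr) - g_{\mathrm{observed}}\bigr\|^{2} \;+\; \alpha\, R(x'),$$

where $R$ is an image prior such as total variation, and the attack trainer runs this optimization for up to `max_epochs` steps.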
You may use the TensorBoard logs to track your attack and to compare results of different runs:

```bash
tensorboard --logdir PATH_TO_ATTACK_LOGS
```
- Evaluate the attack results (see examples)

```bash
python examples/calc_metric.py --dir PATH_TO_ATTACK_RESULTS
```
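The metrics script reports image-similarity scores between reconstructions and the originals. For orientation, here is a minimal NumPy version of one standard metric, PSNR (a sketch of the general formula; the exact metric set and implementation used by calc_metric.py may differ):

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB: higher means the
    reconstruction is closer to the reference image."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4))      # stand-in "original" image
b = np.full((4, 4), 0.5)  # stand-in "reconstruction"; MSE = 0.25
```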
GradAttack is currently in an "alpha" stage in which we are working to improve its capabilities and design.
Contributions are welcome! See the contributing guide for detailed instructions on how to contribute to our project.
If you want to use GradAttack for your research (much appreciated!), you can cite it as follows:
```bibtex
@inproceedings{huang2021evaluating,
  title={Evaluating Gradient Inversion Attacks and Defenses in Federated Learning},
  author={Huang, Yangsibo and Gupta, Samyak and Song, Zhao and Li, Kai and Arora, Sanjeev},
  booktitle={NeurIPS},
  year={2021}
}
```
This project is supported in part by Ma Huateng Foundation, Schmidt Foundation, NSF, Simons Foundation, ONR and DARPA/SRC. Yangsibo Huang and Samyak Gupta are supported in part by the Princeton Graduate Fellowship. We would like to thank Quanzheng Li, Xiaoxiao Li, Hongxu Yin and Aoxiao Zhong for helpful discussions, and members of Kai Li’s and Sanjeev Arora’s research groups for comments on early versions of this library.