This is a simple implementation of Federated Learning (FL) with Differential Privacy (DP). The bare FL model (without DP) is a reproduction of the paper Communication-Efficient Learning of Deep Networks from Decentralized Data [1]. Each client applies a DP mechanism locally to perturb its trained parameters before uploading them to the parameter server. The Gaussian noise is calibrated according to the moments accountant technique [2] using tensorflow-privacy [3].
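The local perturbation step can be sketched as follows. This is a minimal numpy illustration of the Gaussian mechanism (clip the update to an L2 norm bound, then add noise scaled by a noise multiplier), not the repository's actual API; the function and variable names here are hypothetical:

```python
import numpy as np

def perturb_update(update, clip_norm, noise_multiplier, rng=None):
    """Clip `update` to L2 norm `clip_norm`, then add Gaussian noise
    with standard deviation noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    # Scale down only when the norm exceeds the clipping bound
    clipped = update / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

update = np.ones(100) * 2.0  # toy "trained parameters"
noisy = perturb_update(update, clip_norm=8.0, noise_multiplier=1.1)
```

In the repository itself, the noise multiplier is derived from the per-round privacy budget via the moments accountant rather than set by hand.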
- torch 1.7.1
- tensorflow-privacy 0.5.1
- numpy 1.16.2
FLModel.py: definitions of the FL client and FL server classes
MLModel.py: CNN models for the MNIST and FEMNIST datasets
utils.py: samples MNIST in a non-i.i.d. manner
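A minimal sketch of what non-i.i.d. sampling typically looks like, following the shard-based scheme of [1] (sort examples by label, cut them into shards, assign a few shards to each client). The function name and parameters here are illustrative, not necessarily what utils.py implements:

```python
import numpy as np

def noniid_partition(labels, client_num, shards_per_client=2, seed=0):
    """Sort indices by label, split into equal shards, and give each
    client `shards_per_client` randomly chosen shards."""
    rng = np.random.default_rng(seed)
    idx = np.argsort(labels)  # group same-label examples together
    n_shards = client_num * shards_per_client
    shards = np.array_split(idx, n_shards)
    order = rng.permutation(n_shards)
    return [
        np.concatenate([shards[j] for j in
                        order[i * shards_per_client:(i + 1) * shards_per_client]])
        for i in range(client_num)
    ]

labels = np.repeat(np.arange(10), 600)  # toy set: 6000 examples, 10 classes
parts = noniid_partition(labels, client_num=100)
```

Each client ends up with samples from only a couple of classes, which is the non-i.i.d. setting studied in [1].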
- Download MNIST dataset
- Install tensorflow-privacy
- Set parameters in test.py/test.ipynb
- Run `python test.py` or execute `test.ipynb` to train the model on the MNIST dataset
```python
# code segment in test.py/test.ipynb
lr = 0.1
fl_param = {
    'output_size': 10,         # number of units in the output layer
    'client_num': client_num,  # number of clients
    'model': MnistCNN,         # model
    'data': d,                 # dataset
    'lr': lr,                  # learning rate
    'E': 100,                  # number of local iterations
    'eps': 8.0,                # privacy budget for each global communication round
    'delta': 1e-5,             # approximate differential privacy: (epsilon, delta)-DP
    'q': 0.01,                 # sampling rate
    'clip': 8,                 # clipping norm
    'tot_T': 5,                # number of aggregation times (communication rounds)
    'batch_size': 128,
    'device': device
}
```
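For context, the server-side FedAvg aggregation [1] performed each communication round can be sketched in plain numpy. This illustrates the averaging rule only and is not the repository's FLModel API:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors, with weights
    proportional to each client's local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

params = [np.full(4, 1.0), np.full(4, 3.0)]    # two toy clients
avg = fedavg(params, client_sizes=[100, 300])  # second client weighs 3x
```

With DP, the server averages the perturbed parameters uploaded by the clients, so the aggregation step itself is unchanged.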
Note that `eps` is the privacy budget for each global communication round. You may use composition theorems to compute the total privacy budget.
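For example, under basic (sequential) composition the per-round budgets simply add up. This is a loose bound; advanced composition or RDP accounting gives tighter totals:

```python
# Parameters from the config above
eps_per_round, delta_per_round = 8.0, 1e-5
tot_T = 5

# Basic composition: (eps, delta)-DP composed T times is (T*eps, T*delta)-DP
total_eps = tot_T * eps_per_round
total_delta = tot_T * delta_per_round
print(total_eps, total_delta)  # 40.0 5e-05
```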
[1] McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proc. Artificial Intelligence and Statistics (AISTATS), 2017.
[2] Abadi, Martin, et al. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2016.
[3] TensorFlow Privacy: https://github.com/tensorflow/privacy