
Federated Learning

This repo partly reproduces the paper Communication-Efficient Learning of Deep Networks from Decentralized Data (McMahan et al., AISTATS 2017).
So far, only experiments on MNIST and CIFAR-10 (both IID and non-IID) are provided.

Note: the scripts will be slow, since parallel computing is not implemented.
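
For intuition, the core of FedAvg is an element-wise average of the clients' model weights. Below is a minimal PyTorch sketch of that aggregation step; it is illustrative, not necessarily the exact code in this repo:

import copy
import torch

def fed_avg(w_locals):
    # Element-wise average of a list of PyTorch state_dicts (the FedAvg step).
    w_avg = copy.deepcopy(w_locals[0])
    for k in w_avg.keys():
        for w in w_locals[1:]:
            w_avg[k] += w[k]
        w_avg[k] = torch.div(w_avg[k], len(w_locals))
    return w_avg

# Usage: average the weights of two (toy) client models.
clients = [torch.nn.Linear(2, 1) for _ in range(2)]
global_weights = fed_avg([m.state_dict() for m in clients])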

Requirements

python>=3.6
pytorch>=0.4

Run

The MLP and CNN baseline models are trained with:

python main_nn.py

Federated learning with the MLP and CNN models is run with:

python main_fed.py

See the arguments in options.py.

For example:

python main_fed.py --dataset mnist --iid --num_channels 1 --model cnn --epochs 50 --gpu 0

NB: for CIFAR-10, --num_channels must be 3.
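
The flags in the example above come from options.py (argparse). A rough sketch of that subset follows; the defaults shown are assumptions on my part, so check options.py for the real values:

import argparse

def args_parser():
    parser = argparse.ArgumentParser()
    # Flags used in the example command; defaults below are illustrative guesses.
    parser.add_argument('--dataset', type=str, default='mnist', help='name of dataset')
    parser.add_argument('--iid', action='store_true', help='use an IID data split')
    parser.add_argument('--num_channels', type=int, default=1,
                        help='input channels: 1 for MNIST, 3 for CIFAR-10')
    parser.add_argument('--model', type=str, default='cnn', help='mlp or cnn')
    parser.add_argument('--epochs', type=int, default=10, help='number of training rounds')
    parser.add_argument('--gpu', type=int, default=0, help='GPU id to use')
    return parser.parse_args()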

Results

MNIST

Results are shown in Table 1 and Table 2, with the FedAvg parameters C=0.1 (fraction of clients sampled per round), B=10 (local batch size), and E=5 (local epochs), following the notation of McMahan et al.

Table 1. Results after 10 epochs of training with a learning rate of 0.01

Model | Acc. of IID | Acc. of Non-IID
FedAVG-MLP | 94.57% | 70.44%
FedAVG-CNN | 96.59% | 77.72%

Table 2. Results after 50 epochs of training with a learning rate of 0.01

Model | Acc. of IID | Acc. of Non-IID
FedAVG-MLP | 97.21% | 93.03%
FedAVG-CNN | 98.60% | 93.81%
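
The non-IID split follows McMahan et al.: sort the training examples by label, cut them into shards, and give each client only a few shards, so every client sees just a few digit classes. A sketch under the paper's MNIST setting (100 clients, 200 shards of 300 examples, 2 shards per client); the helper name is hypothetical and the repo's sampling code may differ:

import numpy as np

def mnist_noniid(labels, num_users=100, num_shards=200, shard_size=300):
    # Pathological non-IID split: sort by label, deal out shards to clients.
    idxs = np.argsort(labels)                      # sample indices sorted by label
    shard_ids = np.random.permutation(num_shards)  # shuffle shard ownership
    per_user = num_shards // num_users             # 2 shards per client here
    dict_users = {}
    for i in range(num_users):
        own = shard_ids[i * per_user:(i + 1) * per_user]
        dict_users[i] = np.concatenate(
            [idxs[s * shard_size:(s + 1) * shard_size] for s in own])
    return dict_users

# Usage: fake labels standing in for the 60,000 MNIST training labels.
labels = np.random.randint(0, 10, size=60000)
client_idxs = mnist_noniid(labels)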

Acknowledgements

Thanks to youkaichao.

References

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics (AISTATS), 2017.

Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, and Zi Huang. Learning Private Neural Language Modeling with Attentive Aggregation. In International Joint Conference on Neural Networks (IJCNN), 2019.

Jing Jiang, Shaoxiong Ji, and Guodong Long. Decentralized Knowledge Acquisition for Mobile Internet Applications. World Wide Web, 2020.