- FedAvg -- Communication-Efficient Learning of Deep Networks from Decentralized Data (AISTATS 2017)
- FedAvgM -- Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification (ArXiv)
- FedProx -- Federated Optimization in Heterogeneous Networks (MLSys 2020)
- SCAFFOLD -- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning (ICML 2020)
- FedDyn -- Federated Learning Based on Dynamic Regularization (ICLR 2021)
- FedLC -- Federated Learning with Label Distribution Skew via Logits Calibration (ICML 2022)
- Local -- Local training only (without communication).
- FedBN -- FedBN: Federated Learning On Non-IID Features Via Local Batch Normalization (ICLR 2021)
- FedPer -- Federated Learning with Personalization Layers (AISTATS 2020)
- FedRep -- Exploiting Shared Representations for Personalized Federated Learning (ICML 2021)
- Per-FedAvg -- Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach (NeurIPS 2020)
- pFedMe -- Personalized Federated Learning with Moreau Envelopes (NeurIPS 2020)
- pFedHN -- Personalized Federated Learning using Hypernetworks (ICML 2021)
- pFedLA -- Layer-Wised Model Aggregation for Personalized Federated Learning (CVPR 2022)
- FedFomo -- Personalized Federated Learning with First Order Model Optimization (ICLR 2021)
- FedBabu -- FedBabu: Towards Enhanced Representation for Federated Image Classification (ICLR 2022)
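For orientation, most of the methods above build on the FedAvg aggregation step: the server replaces the global model with a sample-size-weighted average of the clients' parameters. Below is a minimal NumPy sketch of that step — not this benchmark's actual code; `fedavg_aggregate` is a hypothetical helper for illustration only.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: weighted average of client parameters.

    client_weights: one list of np.ndarray per client (layer by layer)
    client_sizes: number of local training samples per client
    """
    total = sum(client_sizes)
    coeffs = [n / total for n in client_sizes]
    # Average each layer across clients, weighted by data share.
    aggregated = []
    for layer_params in zip(*client_weights):
        aggregated.append(sum(c * w for c, w in zip(coeffs, layer_params)))
    return aggregated

# Two toy clients, each with a single 2-parameter "layer".
w1 = [np.array([1.0, 2.0])]
w2 = [np.array([3.0, 4.0])]
global_w = fedavg_aggregate([w1, w2], client_sizes=[1, 3])
print(global_w[0])  # [2.5 3.5]
```

Personalized methods (FedPer, FedRep, pFedMe, ...) typically deviate from this step by keeping some layers local or reweighting the average per client.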
More reproductions/features will come sooner or later (depends on my mood 🤣).
So easy, right? 😎
```shell
# generate the federated dataset
cd data/utils
python run.py -d cifar10 -a 0.1 -cn 100
cd ../../

# run the chosen method's server
cd src/server
python ${algo}.py
```
For full details on how federated datasets are generated, check `data/utils/README.md`.
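A common way to produce the non-IID splits such commands describe is Dirichlet label partitioning, where a smaller concentration parameter yields more skewed per-client label distributions. Assuming the `-a` flag above is such a concentration parameter (an assumption — see `data/utils/README.md` for the actual behavior), a minimal sketch looks like this; `dirichlet_partition` is a hypothetical helper, not this repo's code.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients via Dirichlet(alpha) label shares.

    Smaller alpha -> more skewed (more non-IID) label distributions.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Draw each client's share of class c from a Dirichlet distribution.
        proportions = rng.dirichlet([alpha] * num_clients)
        cut_points = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, chunk in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(chunk.tolist())
    return client_indices

# Toy example: 10 classes, 100 samples each, split over 5 clients.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
assert sum(len(p) for p in parts) == len(labels)  # every sample assigned once
```

With `alpha=0.1` most clients end up dominated by a few classes, which is what makes the resulting federated task heterogeneous.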
- Run `python -m visdom.server` in a terminal.
- Open `localhost:8097` in your browser.
🤗 For now, this benchmark only supports algorithms for image classification.
Regular image datasets

- MNIST (1 x 28 x 28, 10 classes)
- CIFAR-10/100 (3 x 32 x 32, 10/100 classes)
- EMNIST (1 x 28 x 28, 62 classes)
- FashionMNIST (1 x 28 x 28, 10 classes)
- FEMNIST (1 x 28 x 28, 62 classes)
- CelebA (3 x 218 x 178, 2 classes)
- SVHN (3 x 32 x 32, 10 classes)
- USPS (1 x 16 x 16, 10 classes)
- Tiny-Imagenet-200 (3 x 64 x 64, 200 classes)
Medical image datasets

- COVID-19 (3 x 244 x 224, 4 classes)
- Organ-S/A/CMNIST (1 x 28 x 28, 11 classes)
Some reproductions in this benchmark refer to https://github.com/TsingZ0/PFL-Non-IID, which is a great FL benchmark. 👍