FedLab-benchmarks
This repo contains standard FL algorithm implementations and FL benchmarks using FedLab.
Currently, the following algorithms and benchmarks are available:
Optimization Algorithms
- FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data
- FedAsync: Asynchronous Federated Optimization
- FedProx: Federated Optimization in Heterogeneous Networks
- FedDyn: Federated Learning based on Dynamic Regularization
- Personalized-FedAvg: Improving Federated Learning Personalization via Model Agnostic Meta Learning
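At the core of most of the optimization algorithms above is server-side aggregation of client updates. As a minimal illustration (not FedLab's actual API), FedAvg averages client model parameters weighted by each client's local sample count:

```python
def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server aggregation: weighted average of client parameters.

    client_weights: list of parameter vectors (lists of floats), one per client.
    client_sizes: number of local training samples per client (the weights).
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients holding 10 and 30 samples: the larger client dominates the average.
global_model = fedavg_aggregate([[1.0, 2.0], [3.0, 4.0]], [10, 30])
```

Variants such as FedProx and FedDyn keep this aggregation step and change the client-side local objective instead.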
Compression Algorithms
- DGC: Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
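Both compression schemes reduce the bits sent per gradient. As a rough sketch of the idea behind QSGD (a standalone illustration, not FedLab code), each coordinate is stochastically rounded to one of `s` levels of the gradient's norm so that the quantizer stays unbiased:

```python
import random

def qsgd_quantize(vec, s=4, seed=0):
    """Unbiased stochastic s-level quantization in the style of QSGD.

    Each coordinate v is mapped to sign(v) * ||vec||_2 * (l / s), where the
    integer level l is randomly rounded up or down so that E[q] = v.
    """
    rng = random.Random(seed)
    norm = sum(v * v for v in vec) ** 0.5
    if norm == 0.0:
        return [0.0] * len(vec)
    out = []
    for v in vec:
        level = abs(v) / norm * s            # real-valued level in [0, s]
        low = int(level)                     # floor of the level
        l = low + (1 if rng.random() < level - low else 0)  # round up w.p. (level - low)
        sign = 1.0 if v >= 0 else -1.0
        out.append(sign * norm * l / s)
    return out
```

Only the norm, signs, and integer levels need to be transmitted, which is far cheaper than sending full-precision floats.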
Datasets
- LEAF: A Benchmark for Federated Settings
- NIID-Bench: Federated Learning on Non-IID Data Silos: An Experimental Study
Working list
- PFL: Debiasing Model Updates for Improving Personalized Federated Training
- qFFL: Fair Resource Allocation in Federated Learning
- FedMGDA+: Federated Learning meets Multi-objective Optimization
We warmly welcome contributions of federated learning algorithms based on FedLab. If you encounter any problems, do not hesitate to submit an issue or send an email to the repo maintainers.