Read this in other languages: English, 简体中文.
Federated learning (FL), first proposed by Google, is a burgeoning research area of machine learning that aims to protect individual data privacy in distributed training, especially in finance, smart healthcare, and edge computing. Unlike traditional data-centralized distributed machine learning, each participant in the FL setting trains a local model on its own data and then collaborates with other participants through specific strategies to obtain the final model, avoiding any direct sharing of raw data.
To relieve researchers of the burden of implementing FL algorithms and free them from repeatedly rebuilding basic FL infrastructure, we introduce FedLab, a highly customizable framework. FedLab provides the modules necessary for FL simulation, including communication, compression, model optimization, data partition, and other functional components. Users can build an FL simulation environment with custom modules, like playing with LEGO bricks. For better understanding and easier usage, FL algorithm benchmarks implemented with FedLab are also provided.
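To make the modular idea concrete, the sketch below shows what a data-partition component typically does: a label-Dirichlet non-IID split, a common setup in FL benchmarks. The function name and parameters here are illustrative placeholders, not FedLab's actual API.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with label skew controlled by alpha.

    Smaller alpha -> more non-IID. Illustrative sketch, not FedLab's API.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # fraction of class-c samples assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

Each returned index list can then back a per-client DataLoader (e.g., via `torch.utils.data.Subset`).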
- Documentation: English version | 中文版
- Overview of FedLab
- Installation & Setup
- Examples
- Contribute Guideline
- API Reference
The new FedLab release (v1.2.1) provides a fully finished communication pattern. We further simplified the APIs of the NetworkManager module and reorganized the trainer APIs. Three basic scenarios (Standalone, Cross-process, and Hierarchical) are currently supported by choosing different client trainers. Please see our demos.
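For intuition, the Standalone scenario amounts to simulating every client serially inside one process. Below is a minimal, framework-agnostic sketch of one synchronous round with FedAvg aggregation; all names are illustrative and do not reflect FedLab's actual trainer API.

```python
import copy
import torch

def run_standalone_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    """Simulate one synchronous FL round serially in a single process.

    Illustrative sketch, not FedLab's API: `client_loaders` is a list of
    per-client torch DataLoaders.
    """
    states, sizes = [], []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)  # each client starts from the global weights
        optimizer = torch.optim.SGD(local.parameters(), lr=lr)
        criterion = torch.nn.CrossEntropyLoss()
        for _ in range(local_epochs):
            for x, y in loader:
                optimizer.zero_grad()
                criterion(local(x), y).backward()
                optimizer.step()
        states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    # FedAvg: average parameters weighted by each client's dataset size
    total = sum(sizes)
    averaged = {
        key: sum((n / total) * s[key].float() for n, s in zip(sizes, states))
        for key in states[0]
    }
    global_model.load_state_dict(averaged)
    return global_model
```

The Cross-process and Hierarchical scenarios distribute the same logical roles across processes and machines instead of running them in one loop.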
Thanks to our contributors, algorithms and benchmarks are provided in our FedLab-Benchmarks repo. More FedLab implementations of FL algorithms are coming.
- Optimization Algorithms
- FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data
- FedAsync: Asynchronous Federated Optimization
- FedProx: Federated Optimization in Heterogeneous Networks
- FedDyn: Federated Learning based on Dynamic Regularization
- Personalized-FedAvg: Improving Federated Learning Personalization via Model Agnostic Meta Learning
- qFFL: Fair Resource Allocation in Federated Learning
- Compression Algorithms
- DGC: Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding (a quantizer sketch follows this list)
- Datasets
- LEAF: A Benchmark for Federated Settings
- NIID-Bench: Federated Learning on Non-IID Data Silos: An Experimental Study
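As one concrete example from the compression list above, here is a hedged sketch of the QSGD stochastic quantizer described in the cited paper. The function name and the `num_levels` parameter are our own choices, not FedLab's implementation.

```python
import torch

def qsgd_quantize(v: torch.Tensor, num_levels: int = 256) -> torch.Tensor:
    """Unbiased stochastic uniform quantization in the style of QSGD.

    Each coordinate of v is snapped to one of `num_levels` levels of
    |v_i| / ||v||_2, with stochastic rounding so that E[q(v)] = v.
    Illustrative sketch, not FedLab's implementation.
    """
    norm = v.norm(p=2)
    if norm == 0:
        return torch.zeros_like(v)
    s = num_levels - 1
    scaled = v.abs() / norm * s                      # each entry lies in [0, s]
    lower = scaled.floor()                           # lower quantization level
    level = lower + torch.bernoulli(scaled - lower)  # round up w.p. (scaled - lower)
    return norm * v.sign() * level / s
```

In an FL round, a client would quantize its update with such a function before upload; because the quantizer is unbiased, the aggregate remains correct in expectation while the payload shrinks to one norm, the signs, and low-bit integer levels.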
We will keep collecting FL resources and maintain a repo for FL beginners and researchers: Dive-into-Federated-Learning.
You are welcome to contribute to this project through pull requests.
- By contributing, you agree that your contributions will be licensed under the Apache License, Version 2.0.
- Docstrings and code should follow the Google Python Style Guide: 中文版 | English.
- The code should provide test cases using `unittest.TestCase`.
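For instance, a minimal test in that style could look like the following; the class name and the quantity under test are placeholders for your actual module.

```python
import unittest
import torch

class TestFedAvg(unittest.TestCase):
    """Illustrative skeleton only; test your actual module instead."""

    def test_equal_weight_average(self):
        a, b = torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])
        avg = 0.5 * a + 0.5 * b  # FedAvg with two equally sized clients
        self.assertTrue(torch.allclose(avg, torch.tensor([2.0, 3.0])))

if __name__ == "__main__":
    unittest.main()
```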
Please cite FedLab in your publications if it helps your research:
```
@article{smile2021fedlab,
  title={FedLab: A Flexible Federated Learning Framework},
  author={Dun Zeng and Siqi Liang and Xiangjing Hu and Zenglin Xu},
  journal={arXiv preprint arXiv:2107.11621},
  year={2021}
}
```
Project Investigator: Prof. Zenglin Xu (xuzenglin@hit.edu.cn).
For technical issues related to FedLab development, please contact our development team through GitHub issues or email:
- Dun Zeng: zengdun@foxmail.com
- Siqi Liang: zszxlsq@gmail.com