A flexible Federated Learning Framework based on PyTorch, simplifying your Federated Learning research.

FedLab: A Flexible Federated Learning Framework

Read this in other languages: English, 简体中文.

Federated learning (FL), first proposed by Google, is a burgeoning research area of machine learning that aims to protect individual data privacy in distributed training, especially in finance, smart healthcare, and edge computing. Unlike traditional data-centralized distributed machine learning, participants in an FL setting train local models on their own data and then collaborate with other participants through specific aggregation strategies to obtain a final global model, avoiding direct data sharing.

To relieve researchers of the burden of implementing FL algorithms and free them from repeatedly re-implementing basic FL settings, we introduce FedLab, a highly customizable framework. FedLab provides the modules necessary for FL simulation, including communication, compression, model optimization, data partitioning, and other functional components. Users can build an FL simulation environment from custom modules like LEGO bricks. For better understanding and ease of use, benchmark implementations of FL algorithms are also provided in FedLab.
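To illustrate the train-locally-then-aggregate workflow described above, here is a minimal FedAvg round in plain PyTorch. This is an illustrative sketch, not the FedLab API: the helper names `local_train` and `fedavg` are hypothetical, and a real framework run would use FedLab's communication and optimization modules instead.

```python
# Illustrative FedAvg round in plain PyTorch (NOT the FedLab API):
# each client updates a copy of the global model on its private data,
# then the server averages the resulting parameters, so raw data
# never leaves the clients.
import copy
import torch
import torch.nn as nn

def local_train(model, data, target, lr=0.1, epochs=1):
    """One client's local update on its private data (hypothetical helper)."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(local(data), target).backward()
        opt.step()
    return local.state_dict()

def fedavg(global_model, client_states, weights):
    """Weighted average of client parameters (the FedAvg aggregation rule)."""
    avg = copy.deepcopy(global_model.state_dict())
    for key in avg:
        avg[key] = sum(w * s[key] for w, s in zip(weights, client_states))
    global_model.load_state_dict(avg)
    return global_model

# Two clients with toy regression data, equal aggregation weights
torch.manual_seed(0)
global_model = nn.Linear(2, 1)
clients = [(torch.randn(8, 2), torch.randn(8, 1)) for _ in range(2)]
states = [local_train(global_model, x, y) for x, y in clients]
global_model = fedavg(global_model, states, weights=[0.5, 0.5])
```

Repeating this round many times, with a sampled subset of clients per round, is the basic FedAvg training loop that most of the optimization algorithms benchmarked below build on.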

Benchmarks

  1. Optimization Algorithms
  2. Compression Algorithms
  3. Datasets

More FedLab implementations of FL algorithms are coming soon. For more information, please star our FedLab Benchmark repository.
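A common ingredient of the dataset benchmarks above is a non-IID split of a centralized dataset across clients. A widely used scheme draws per-class client proportions from a Dirichlet distribution. The sketch below is framework-agnostic and assumes nothing about FedLab's actual partition API; the function name `dirichlet_partition` is hypothetical.

```python
# Illustrative non-IID partition via a Dirichlet distribution over class
# labels (a common FL benchmark scheme; NOT FedLab's partition API).
# Smaller alpha => more skewed (more heterogeneous) client datasets.
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients by class-wise Dirichlet draws."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Proportion of this class assigned to each client
        props = rng.dirichlet([alpha] * num_clients)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_indices[cid].extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_partition(labels, num_clients=5, alpha=0.3)
```

Each client receives a disjoint index list; with `alpha=0.3` most clients end up dominated by a few classes, which is the heterogeneity regime the optimization benchmarks target.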

Awesome Resources

Tutorials

Survey

  • [ICLR-DPML 2021] FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks [Paper] [Code]
  • [arXiv 2021] Federated Graph Learning -- A Position Paper [Paper]
  • [IEEE TKDE 2021] A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection [Paper]
  • [arXiv 2021] A Survey of Fairness-Aware Federated Learning [Paper]
  • [Foundations and Trends in Machine Learning 2021] Advances and Open Problems in Federated Learning [Paper]
  • [arXiv 2020] Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective [Paper]
  • [IEEE Signal Processing Magazine 2020] Federated Learning: Challenges, Methods, and Future Directions [Paper]
  • [IEEE Communications Surveys & Tutorials 2020] Federated Learning in Mobile Edge Networks: A Comprehensive Survey [Paper]
  • [IEEE TIST 2019] Federated Machine Learning: Concept and Applications [Paper]

Frameworks

Benchmarks

FL + Semi-supervised Learning

  • [ICLR 2021] Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning [Paper] [Code]
  • [arXiv 2021] SemiFL: Communication Efficient Semi-Supervised Federated Learning with Unlabeled Clients [Paper]
  • [IEEE BigData 2021] Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models [Paper]
  • [arXiv 2020] Benchmarking Semi-supervised Federated Learning [Paper] [Code]

FL + HPC

  • [arXiv 2022] Sky Computing: Accelerating Geo-distributed Computing in Federated Learning [Paper] [Code]
  • [ACM HPDC 2020] TiFL: A Tier-based Federated Learning System [Paper] [Video]

Awesome List

Contribution

You're welcome to contribute to this project through pull requests.

  • By contributing, you agree that your contributions will be licensed under the Apache License, Version 2.0.
  • Docstrings and code should follow the Google Python Style Guide: 中文版 | English.
  • Code should include test cases written with unittest.TestCase.

Citation

Please cite FedLab in your publications if it helps your research:

@article{smile2021fedlab,
    title={FedLab: A Flexible Federated Learning Framework},
    author={Zeng, Dun and Liang, Siqi and Hu, Xiangjing and Xu, Zenglin},
    journal={arXiv preprint arXiv:2107.11621},
    year={2021}
}

Contact

Project Investigator: Prof. Zenglin Xu (xuzenglin@hit.edu.cn).

For technical issues related to FedLab development, please contact our development team through GitHub issues or email: