fedprox

There are 11 repositories under the fedprox topic.

  • vaseline555/Federated-Learning-in-PyTorch

    Handy PyTorch implementation of Federated Learning (for your painless research)

    Language: Python
  • ki-ljl/FedProx-PyTorch

    PyTorch implementation of FedProx (Federated Optimization for Heterogeneous Networks, MLSys 2020). A minimal sketch of the FedProx proximal objective is shown after this list.

    Language: Python
  • Lee-Gihun/FedNTD

    (NeurIPS 2022) Official Implementation of "Preservation of the Global Knowledge by Not-True Distillation in Federated Learning"

    Language: Python
  • c-gabri/Federated-Learning-PyTorch

    PyTorch implementation of the Federated Learning algorithms FedSGD, FedAvg, FedAvgM, FedIR, FedVC, FedProx, and standard SGD, applied to visual classification. Client distributions are synthesized with arbitrary non-identicalness and imbalance (Dirichlet priors; a partitioning sketch is shown after this list). Client systems can be arbitrarily heterogeneous. Several mobile-friendly models are provided.

    Language: Python
  • ayushm-agrawal/Federated-Learning-Implementations

    This repository contains implementations of different papers on Federated Learning.

    Language: Jupyter Notebook
  • ysyisyourbrother/Federated-Learning-Research

    An implementation of federated learning research baseline methods based on FedML-core, which can be deployed on a real distributed cluster and helps researchers explore problems that arise in real FL systems.

    Language: Python
  • shyam671/Federated-Learning

    Language: Jupyter Notebook
  • anandcu3/Federated-Learning-for-Remote-Sensing

    Federated Learning experiments on remote sensing image data using convolutional neural networks

    Language: Jupyter Notebook
  • Lee-Gihun/FedSOL

    (CVPR 2024) Official Implementation of "FedSOL: Stabilized Orthogonal Learning with Proximal Restrictions in Federated Learning"

    Language: Python
  • BThameur/FL-for-Smart-Healthcare

    Experiments from the FL in Healthcare project (MRI image use case) using Flower

    Language: Jupyter Notebook
  • gautamHCSCV/Federated-Learning-Methods-Comparison

    We use the Adversarial Model Perturbations (AMP) regularizer to regularize clients' models. The AMP regularizer perturbs the model parameters to obtain a more generalized model. AMP is claimed to reach flat minima and is therefore expected to reach flat minima in FL settings as well. A sketch of an AMP-style update is shown after this list.

    Language: Python
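
For orientation, here is a minimal sketch of the FedProx client update (not code from any repository above): each client minimizes its local loss plus a proximal term (mu/2)·||w − w_global||², which keeps local updates close to the global model under heterogeneous data. The model, data loader, and hyperparameter values are illustrative assumptions.

```python
# Minimal FedProx sketch (illustrative; not taken from any listed repository).
# Each client minimizes: local_loss(w) + (mu / 2) * ||w - w_global||^2.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def fedprox_local_update(model: nn.Module, loader, mu: float = 0.01,
                         lr: float = 0.01, epochs: int = 1) -> nn.Module:
    """Run FedProx local training starting from the current global model."""
    global_params = [p.detach().clone() for p in copy.deepcopy(model).parameters()]
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # Proximal term: (mu / 2) * ||w - w_global||^2
            prox = sum((p - g).pow(2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + 0.5 * mu * prox).backward()
            optimizer.step()
    return model
```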
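
Several of the repositories above synthesize non-IID client data with Dirichlet priors. The following sketch (an assumption about the common recipe, not code from c-gabri/Federated-Learning-PyTorch) partitions sample indices so that each client's label proportions are drawn from Dir(alpha); a smaller alpha yields more skewed, less balanced clients.

```python
# Dirichlet-based non-IID partitioning sketch (illustrative).
import numpy as np


def dirichlet_partition(labels: np.ndarray, num_clients: int,
                        alpha: float = 0.5, seed: int = 0):
    """Split sample indices across clients with per-class proportions ~ Dir(alpha)."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]

    for c in range(num_classes):
        idx_c = np.flatnonzero(labels == c)
        rng.shuffle(idx_c)
        # Fraction of class c assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(idx_c)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx_c, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```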
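
The AMP regularizer mentioned in the last entry perturbs the model weights adversarially before computing the update, pushing training toward flatter minima. The step below is a hedged sketch of that idea (a sharpness-aware-style approximation assumed from the description above, not the repository's code); the epsilon radius and cross-entropy loss are illustrative choices.

```python
# AMP-style training step sketch (assumed from the description; illustrative).
import torch
import torch.nn.functional as F


def amp_step(model, x, y, optimizer, epsilon: float = 0.05):
    """One training step that evaluates the loss at adversarially perturbed weights."""
    # 1) Gradient at the current weights gives the adversarial direction.
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12

    # 2) Perturb the weights by epsilon along the normalized gradient.
    perturbations = []
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            delta = epsilon * g / grad_norm
            p.add_(delta)
            perturbations.append(delta)

    # 3) Backpropagate the loss at the perturbed weights, then restore them.
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, delta in zip(model.parameters(), perturbations):
            p.sub_(delta)

    # 4) Apply the update (computed at the perturbed point) to the original weights.
    optimizer.step()
```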