Pinned Repositories
alexnet-pytorch
PyTorch implementation of AlexNet
Awesome-Federated-Learning
Federated Learning Library: https://fedml.ai
CMFL
CMFL: Mitigating Communication Overhead for Federated Learning / PyTorch reimplementation.
CNNs_HAR_and_HR
This repository is an artifact for the paper "CNNs for Heart Rate Estimation and Human Activity Recognition in Wrist Worn Sensing Applications" submitted to the WristSense workshop as part of PerCom 2020.
Communication-efficient-federated-continual-learning
Communication-efficient federated continual learning
Communication-Efficient-Federated-Learning
deep-gradient-compression
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
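The core idea behind Deep Gradient Compression is to transmit only the largest-magnitude gradient entries each round. A minimal top-k sparsification sketch (the paper's full method also adds momentum correction and local gradient accumulation, omitted here; function and parameter names are illustrative):

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Zero out all but the largest-magnitude `ratio` fraction of entries.

    Sketch of the sparsification step in Deep Gradient Compression;
    only the surviving (index, value) pairs would be communicated.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-magnitude entries
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)
```

At a 1% ratio this reduces the communicated payload by roughly two orders of magnitude per layer, at the cost of delayed updates for small gradient coordinates.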
deep-learning-with-python-notebooks
Jupyter notebooks for the code samples of the book "Deep Learning with Python"
DisPFL
[ICML 2022] "DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training"
examples
TensorFlow examples
KOUDA-AMINE's Repositories
KOUDA-AMINE/FedPSO
FedPSO: Federated Learning Using Particle Swarm Optimization to Reduce Communication Costs
KOUDA-AMINE/signSGD
Code for the signSGD paper
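signSGD compresses communication by sending only the sign of each gradient coordinate (1 bit per parameter instead of 32). A minimal single-worker update sketch, assuming plain NumPy arrays (the paper's distributed variant aggregates signs across workers, e.g. by majority vote as in the repository below):

```python
import numpy as np

def signsgd_step(params, grad, lr=0.01):
    """One signSGD update: move each parameter by a fixed step
    in the direction opposite the sign of its gradient."""
    return params - lr * np.sign(grad)
```

Note that `np.sign` returns 0 for exactly-zero gradients, so those coordinates are left untouched.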
KOUDA-AMINE/CNNs_HAR_and_HR
This repository is an artifact for the paper "CNNs for Heart Rate Estimation and Human Activity Recognition in Wrist Worn Sensing Applications" submitted to the WristSense workshop as part of PerCom 2020.
KOUDA-AMINE/pyCSalgos
Python Compressed Sensing algorithms
KOUDA-AMINE/fl_public
KOUDA-AMINE/federated-meta
Unofficial Pytorch implementation of "Federated Meta-Learning with Fast Convergence and Efficient Communication"
KOUDA-AMINE/kotlin-dsl-samples
Sample builds using the Gradle Kotlin DSL
KOUDA-AMINE/FedMA
Code for Federated Learning with Matched Averaging, ICLR 2020.
KOUDA-AMINE/deep-gradient-compression
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
KOUDA-AMINE/Federated_Learning_Horizontal
An Implementation of the Federated Averaging Algorithm as described in the Paper - Communication-Efficient Learning of Deep Networks from Decentralized Data by H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas
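The server-side aggregation step of Federated Averaging described above is a dataset-size-weighted mean of the client models. A minimal sketch, assuming each client model is a NumPy weight array (a real implementation would apply this per layer after local SGD rounds):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client models, weighting each
    client by the number of local training examples it holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

Clients with more local data therefore pull the global model proportionally harder, which is what distinguishes FedAvg from a plain unweighted average.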
KOUDA-AMINE/fedavgpy
On the Convergence of FedAvg on Non-IID Data
KOUDA-AMINE/CMFL
CMFL: Mitigating Communication Overhead for Federated Learning / PyTorch reimplementation.
KOUDA-AMINE/TwoStreamFederatedLearning
The implementation of "Two-Stream Federated Learning: Reduce the Communication Costs" (VCIP 2018)
KOUDA-AMINE/federated-learning
KOUDA-AMINE/signSGD-with-Majority-Vote
KOUDA-AMINE/keras-mnist-tutorial
For a mini tutorial at U of T, a tutorial on MNIST classification in Keras.
KOUDA-AMINE/openthread
OpenThread released by Nest is an open-source implementation of the Thread networking protocol