ahmedcs
a.k.a. Ahmed M. Abdelmoniem - Associate Professor at QMUL, UK - Head of the SAYED Systems Group - Interested in ML, networking, and distributed systems
Queen Mary University of London, London
Pinned Repositories
adaptive-federated-learning
Code for paper "Adaptive Federated Learning in Resource Constrained Edge Computing Systems"
grace
GRACE - GRAdient ComprEssion for distributed deep learning
HyGenICC
Hypervisor-based Generic Congestion Control for Data Centres
Practical_FL_Tutorial
Advancing Federated Learning in Practice: From Theory to Real-World Edge Applications
REFL
Resource Efficient Federated Learning
RWNDQ
RWNDQ is an Equal Share Allocation Switch Design for Data Centre Networks
SICC
SDN-based Incast Congestion Control for Data Centers
SIDCo
SIDCo is an efficient statistical-based gradient compression technique for distributed training systems
T-RACKs
Timely ACKs Retransmission for Data Centres
TCP_loss_monitor
Socket-level monitoring of TCP socket events (e.g., open, close, CWND updates, fast retransmits, and retransmission timeouts)
ahmedcs's Repositories
ahmedcs/deep-gradient-compression
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
ahmedcs/grace
GRACE - GRAdient ComprEssion for distributed deep learning
ahmedcs/acme
A library of reinforcement learning components and agents
ahmedcs/ai-research
ahmedcs/artemis-bidirectional-compression
Artemis: fast convergence guarantees for bidirectional compression in Federated Learning
ahmedcs/BatchCrypt
ahmedcs/cgau-client-adaptation
ahmedcs/CollaborativeFairFederatedLearning
Official implementation of our work "Collaborative Fairness in Federated Learning."
ahmedcs/CommEfficient
PyTorch code for benchmarking communication-efficient distributed SGD optimization algorithms
ahmedcs/DataPoisoning_FL
Code for Data Poisoning Attacks Against Federated Learning Systems
ahmedcs/ddl-benchmarks
ddl-benchmarks: Benchmarks for Distributed Deep Learning
ahmedcs/deepspeech.pytorch
Speech Recognition using DeepSpeech2.
ahmedcs/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
ahmedcs/Dem-AI
Democratized Learning
ahmedcs/edge-consensus-learning
P2P distributed deep learning framework built on PyTorch.
ahmedcs/federated_adaptation
Salvaging Federated Learning by Local Adaptation
ahmedcs/FedMA
Code for Federated Learning with Matched Averaging, ICLR 2020.
ahmedcs/FedProx-1
Federated Optimization in Heterogeneous Networks (MLSys '20)
ahmedcs/FPPDL
Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models"
ahmedcs/GaussianK-SGD
Understanding Top-k Sparsification in Distributed Deep Learning
ahmedcs/HashingDeepLearning
Codebase for "SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems"
ahmedcs/KD_UDA
ahmedcs/leaf
Leaf: A Benchmark for Federated Settings
ahmedcs/LG-FedAvg
Federated Learning with Local and Global Representations
ahmedcs/non_iid_dml
ahmedcs/oneflow
OneFlow is a performance-centered and open-source deep learning framework.
ahmedcs/Split-Learning-and-Federated-Learning
Investigating Split Learning and Federated Learning
ahmedcs/Synaptic-Flow
ahmedcs/toolbox
Curated list of libraries for a faster machine learning workflow
ahmedcs/ZeroShotKnowledgeTransfer
Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching"