This repository implements baseline algorithms for Federated Learning.

Currently, the available baselines are FedAvg, FedAvgM, and FedAdam. A baseline is selected with the `--fed_type` option; the default is FedAvg.
Experiments were conducted matching, as closely as possible, the federated hyper-parameters given in *Adaptive Federated Optimization*, using the CIFAR-10 dataset:
- 4000 global rounds
- 500 training samples per client
- 1 local epoch
- batch size of 20
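Under this setup, every global round the server samples a small fraction of clients, each trains locally for one epoch, and the server averages the resulting models. A minimal sketch of the FedAvg aggregation step (the names `fedavg_aggregate`, `client_weights`, and `client_sizes` are hypothetical, not identifiers from this repository):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: average client models, weighted by sample count.

    client_weights: list of np.ndarray, one flattened model per client
    client_sizes:   list of int, number of training samples per client
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients with equal data (500 samples each, as in the setup above)
# contribute equally to the new global model.
w_global = fedavg_aggregate(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0])],
    [500, 500],
)
```

With equal client sizes this reduces to a plain mean; with 500 samples per client, as configured here, all sampled clients are weighted identically.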
Baseline | Accuracy (avg ± std) | Command used to obtain results |
---|---|---|
FedAvg | 73.8 ± 1.3 | `federated_main.py --epochs=4000 --client_lr=0.0316 --num_clients=450 --frac=0.023 --local_ep=1 --local_bs=20 --num_workers=16` |
FedAvgM | 84.1 (single run) | `federated_main.py --fed_type=fedavgm --epochs=4000 --client_lr=0.003 --num_clients=450 --frac=0.023 --momentum=0.9 --local_ep=1 --local_bs=20 --num_workers=8` |
FedADAM | 78.0 ± 2.2 | `federated_main.py --epochs=4000 --fed_type=fedadam --global_lr=0.0316 --beta1=0.9 --beta2=0.999 --adam_eps=0.01 --client_lr=0.01 --num_clients=450 --frac=0.023 --local_ep=1 --local_bs=20 --num_workers=16` |
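The FedADAM row passes the server-side hyper-parameters `--global_lr`, `--beta1`, `--beta2`, and `--adam_eps`: the server treats the averaged client update as a pseudo-gradient and applies an Adam-style step to the global model. A minimal sketch of that server update, assuming the flag values from the table (the function name `fedadam_step` is hypothetical, not this repository's API):

```python
import numpy as np

def fedadam_step(x, delta, m, v,
                 global_lr=0.0316, beta1=0.9, beta2=0.999, eps=0.01):
    """One server-side Adam update.

    x:     current global model parameters
    delta: averaged client update (pseudo-gradient)
    m, v:  server first/second moment accumulators
    """
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    x = x + global_lr * m / (np.sqrt(v) + eps)
    return x, m, v

# One round: start from zeros, apply a unit averaged client update.
x = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
x, m, v = fedadam_step(x, np.array([1.0, 1.0]), m, v)
```

FedAvgM differs only in the server step: instead of Adam moments it keeps a single momentum buffer (controlled here by `--momentum=0.9`) and adds it to the global model each round.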
Sample training curves and final accuracies for each of the three currently implemented baselines are shown below.