This is the official implementation of the experiments in the following paper:

> Naoyuki Terashita and Satoshi Hara.
> "Decentralized Hyper-Gradient Computation over Time-Varying Directed Networks."
> arXiv:2210.02129 (under review), 2023.
Our experimental results were produced in an NVIDIA Docker container built from `docker/Dockerfile`. The container can be built and started with the following steps:

```sh
docker build ./docker/ --tag pdbo-hgp
nvidia-docker run -it -u user -v $PWD/pdbo-hgp:/home/user/pdbo-hgp -w /home/user/pdbo-hgp pdbo-hgp /bin/bash
```
```sh
# Full-batch g_i
python main.py PlotZipComputeHyperGradErrorOfSteps config.paper.error_synth_fullbatch --local-scheduler
# Mini-batch g_i
python main.py PlotZipComputeHyperGradErrorOfSteps config.paper.error_synth_minibatch --local-scheduler
```
```sh
# Logistic regression with full-batch g_i
python main.py CompareApproxActualDiffByMostInfluentialPerturbs config.paper.infl_toy --local-scheduler
# CNN with full-batch g_i
python main.py CompareApproxActualDiffByMostInfluentialPerturbs config.paper.infl_emnist_digits_fullbatch --local-scheduler
# CNN with mini-batch g_i
python main.py CompareApproxActualDiffByMostInfluentialPerturbs config.paper.infl_emnist_digits_minibatch --local-scheduler
```
```sh
# Experiments on the fully-connected and static undirected communication networks
## HGP-PL and HGP-MTL
python main.py MakeAccuracyTableHyperSGDOnFedEmSetting config.paper.personalization_fedem_hgp --local-scheduler
## Baselines with hyperparameter tuning
python main.py MakeAccuracyTableBaselineOnFedEmSetting config.paper.personalization_fedem_baseline --local-scheduler

# Experiments on the random undirected and random directed communication networks
## HGP-PL and HGP-MTL
python main.py MakeAccuracyTableHyperSGD config.paper.personalization_sgp_hgp --local-scheduler
## Baselines
python main.py MakeAccuracyTableHyperSGD config.paper.personalization_sgp_baseline --local-scheduler
```
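All of the commands above share one invocation pattern: `main.py` dispatches a Luigi task by class name, the second argument selects a configuration module, and `--local-scheduler` is Luigi's standard flag for running tasks in-process without a central scheduler daemon. A generic sketch of the pattern (the angle-bracket names are placeholders, not actual tasks or configs in this repository):

```sh
# <TaskName>: a Luigi task class defined in this repository (e.g. one of the
#             tasks listed above).
# config.paper.<experiment_config>: the configuration module for the experiment.
# --local-scheduler: run Luigi in-process instead of contacting a scheduler daemon.
python main.py <TaskName> config.paper.<experiment_config> --local-scheduler
```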
Earlier versions of this work:

- 5 Oct 2022 (v1): Personalized Decentralized Bilevel Optimization over Stochastic and Directed Networks (Paper, Codes)
- 31 Jan 2023 (v2): Personalized Decentralized Bilevel Optimization over Random Directed Networks (Paper, Codes)
If you have questions, please contact Naoyuki Terashita (naoyuki.terashita.sk@hitachi.com).