/powersgd

Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727

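The core idea of the paper is to compress each gradient matrix with a single step of power iteration, producing a rank-r factorization that is cheap to compute and to all-reduce. Below is a minimal single-worker sketch of that compression step; the function names are illustrative and not the repository's API, and the real method additionally all-reduces the factors across workers and uses error feedback.

```python
# Illustrative sketch of the rank-r power-iteration compression from
# arXiv:1905.13727. Not the repository's API; the distributed all-reduce
# and error-feedback parts are omitted.
import torch


def orthogonalize(p: torch.Tensor) -> torch.Tensor:
    """Orthonormalize the columns of p (the paper uses Gram-Schmidt; QR here for brevity)."""
    q, _ = torch.linalg.qr(p)
    return q


def powersgd_compress(grad: torch.Tensor, q: torch.Tensor):
    """One power-iteration step: grad is (n x m), q is (m x r) warm-started from the previous step."""
    p = grad @ q              # (n x r)
    p = orthogonalize(p)
    q_new = grad.T @ p        # (m x r)
    return p, q_new


def powersgd_decompress(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Reconstruct the rank-r approximation of the gradient matrix."""
    return p @ q.T


if __name__ == "__main__":
    torch.manual_seed(0)
    grad = torch.randn(256, 128)
    rank = 4
    q = torch.randn(128, rank)  # in practice reused across iterations (warm start)
    p, q = powersgd_compress(grad, q)
    approx = powersgd_decompress(p, q)
    print(f"relative approximation error: {(grad - approx).norm() / grad.norm():.3f}")
```
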
Primary language: Python · License: MIT
