synxlin/deep-gradient-compression
[ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Python · Apache-2.0
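The core idea of Deep Gradient Compression is to transmit only the largest-magnitude gradient entries each step and accumulate the rest locally until they grow large enough to send. The sketch below illustrates that top-k sparsification with local accumulation in plain NumPy; the function name, signature, and parameters are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def sparsify(gradient, residual, ratio=0.01):
    """Illustrative top-k gradient sparsification with local accumulation.

    A simplified sketch of the idea behind Deep Gradient Compression,
    not the repository's implementation: keep only the largest-magnitude
    entries for communication and carry the rest over to the next step.
    """
    acc = gradient + residual                    # add previously unsent gradient mass
    k = max(1, int(acc.size * ratio))            # number of entries to transmit
    idx = np.argpartition(np.abs(acc), -k)[-k:]  # indices of the top-k magnitudes
    values = acc[idx]                            # sparse values actually communicated
    new_residual = acc.copy()
    new_residual[idx] = 0.0                      # transmitted entries leave the residual
    return idx, values, new_residual

# Usage: each worker would send (idx, values) instead of the dense gradient.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
idx, vals, res = sparsify(g, np.zeros_like(g), ratio=0.01)
```

With `ratio=0.01`, only 1% of the gradient entries are communicated per step; the untransmitted 99% are retained in `res` and folded into the next step's gradient, so no gradient information is discarded.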