Tebmer/Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore Skill Distillation and Vertical Domain Distillation of LLMs.