uber-research/permute-quantize-finetune
Using ideas from product quantization for state-of-the-art neural network compression.
Language: Python · License: NOASSERTION
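The description above refers to product quantization. As a rough illustration only (not this repository's actual method, which additionally permutes weights and fine-tunes the codebooks), a minimal NumPy sketch of plain product quantization of a weight matrix might look like the following; all function names and parameters here are hypothetical:

```python
import numpy as np

def train_codebook(subvectors, k=16, iters=10, seed=0):
    """A few Lloyd (k-means) iterations over the subvectors."""
    rng = np.random.default_rng(seed)
    centroids = subvectors[rng.choice(len(subvectors), k, replace=False)]
    for _ in range(iters):
        dists = ((subvectors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        codes = dists.argmin(1)
        for j in range(k):
            members = subvectors[codes == j]
            if len(members):
                centroids[j] = members.mean(0)
    return centroids

def pq_compress(W, d=4, k=16):
    """Split W into d-dimensional subvectors, map each to its nearest
    codeword, and return (codes, codebook)."""
    sub = W.reshape(-1, d)
    codebook = train_codebook(sub, k=k)
    dists = ((sub[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = dists.argmin(1).astype(np.uint8)  # one byte per d weights
    return codes, codebook

def pq_decompress(codes, codebook, shape):
    """Look each code up in the codebook and restore the original shape."""
    return codebook[codes].reshape(shape)

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32)).astype(np.float32)
codes, codebook = pq_compress(W, d=4, k=16)
W_hat = pq_decompress(codes, codebook, W.shape)
# Storage drops from 4 bytes per weight to 1 byte per 4 weights,
# plus a small shared codebook.
ratio = W.nbytes / (codes.nbytes + codebook.nbytes)
```

The compression ratio grows with the subvector length `d` and shrinks with the codebook size `k`; the paper behind this repo improves the accuracy/size trade-off by permuting weights before quantization and fine-tuning afterwards.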
Stargazers
- aashiqmuhamed (Carnegie Mellon University)
- agathauy
- alexanderepstein (@amzn)
- atuxhe
- B05901022 (NTUEE)
- basicskywards (NCTU)
- codestar12 (Austin)
- CodingMice
- d-lowl (Not UK)
- felixfuu
- fgr1986
- fly51fly (PRIS)
- GallonDeng
- ggsonic
- hypnopump
- imbibekk (Seoul)
- janicevidal
- Jiakui
- josecohenca
- kentaroy47 (Keio University)
- kuangliu (Zhejiang University)
- lArkl (Peru)
- lrjconan
- LZHgrla (Peking University)
- maltanar (@AMD Research Labs)
- manakahasegawa
- michalwols (New York)
- NewCoderQ (@Beijing University of Posts and Telecommunications)
- nhannguyen2709 (Ho Chi Minh)
- ninfueng (@tamukohlaboratory)
- phalanx-hk (Japan)
- sagorbrur (@hishab-nlp)
- seansegal (Waabi / University of Toronto)
- shurricanex (Innoblock Technology Limited)
- una-dinosauria (Toronto, Canada)
- yunchenlo (National Tsing Hua University, NTHU)