hanyxu's Stars
mechanicalsea/lighthubert
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
glory20h/FitHuBERT
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning (INTERSPEECH 2022)
pyf98/DPHuBERT
INTERSPEECH 2023: "DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models"
antspy/quantized_distillation
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
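The recipe behind this kind of quantized distillation can be sketched in a few lines: quantize the student's weights in the forward pass with a straight-through estimator, and train the student against the teacher's softened logits. A minimal PyTorch illustration, not the repository's code; the bit-width, temperature, and helper names are assumptions.

```python
import torch
import torch.nn.functional as F

def quantize_ste(w: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization with a straight-through estimator:
    the rounded value is used in the forward pass, gradients flow as if
    the operation were the identity."""
    levels = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / levels
    w_q = torch.round(w / scale).clamp(-levels, levels) * scale
    return w + (w_q - w).detach()  # forward uses w_q, backward uses dL/dw

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend soft-target KL distillation with the usual cross-entropy."""
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# A quantization-aware layer would call quantize_ste(self.weight) before its matmul.
```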
ModelTC/L2_Compression
microsoft/LQ-Nets
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Cheeun/DAQ-pytorch
[WACV2022] Official Code for the "DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks"
deJQK/FracBits
Neural Network Quantization With Fractional Bit-widths
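The fractional bit-width idea can be illustrated compactly: quantize at the two neighbouring integer bit-widths and interpolate with the fractional part, so the bit-width itself becomes a continuous variable that can be optimized. A hedged PyTorch sketch (the actual method additionally learns per-layer or per-channel bit-widths under a resource constraint):

```python
import torch

def uniform_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Plain uniform quantization of x into 2**bits levels over its own range."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / levels
    return torch.round((x - lo) / scale) * scale + lo

def fractional_bit_quantize(x: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Interpolate between the floor- and ceil-bit quantizations so the
    bit-width b (typically an nn.Parameter) stays differentiable.
    Gradients w.r.t. x would be handled with a straight-through estimator."""
    b_lo, b_hi = torch.floor(b), torch.ceil(b)
    frac = b - b_lo
    q_lo = uniform_quantize(x, int(b_lo.item()))
    q_hi = uniform_quantize(x, int(b_hi.item()))
    return (1 - frac) * q_lo + frac * q_hi
```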
anbingxu666/WangDao-DataStructure
Classic algorithm code for the textbook 《数据结构》 (Data Structures)
Qualcomm-AI-research/outlier-free-transformers
nicholasmireles/DotDict
A simple Python library to make chained attributes possible.
zkkli/PSAQ-ViT
[ECCV 2022] Patch Similarity Aware Data-Free Quantization for Vision Transformers
Qualcomm-AI-research/BayesianBits
google/praxis
Qualcomm-AI-research/pruning-vs-quantization
liuzechun/Nonuniform-to-Uniform-Quantization
Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation. In CVPR 2022.
sony/ai-research-code
AMLab-Amsterdam/L0_regularization
Learning Sparse Neural Networks through L0 regularization
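The L0 trick uses "hard concrete" stochastic gates: each gate is a stretched, clipped sigmoid of noisy logits, so it is exactly 0 or 1 with non-zero probability yet still differentiable, and the expected number of active gates can be penalized directly. A small PyTorch sketch using the paper's standard hyperparameters (beta = 2/3, gamma = -0.1, zeta = 1.1); class and method names are illustrative.

```python
import math
import torch
import torch.nn as nn

class HardConcreteGate(nn.Module):
    """Stochastic gates for L0 regularization (Louizos et al.): sample a
    concrete relaxation, stretch it past [0, 1], then hard-clip it."""

    def __init__(self, n_gates, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(n_gates))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        s_bar = s * (self.zeta - self.gamma) + self.gamma  # stretch to (gamma, zeta)
        return s_bar.clamp(0.0, 1.0)                       # hard-clip into [0, 1]

    def l0_penalty(self):
        # Probability that each gate is non-zero; summing gives the expected L0 norm.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()
```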
FLHonker/Awesome-Knowledge-Distillation
Awesome Knowledge-Distillation: a categorized collection of knowledge distillation papers (2014-2021).
he-y/Awesome-Pruning
A curated list of neural network pruning resources.
hypasd-art/KDM
aojunzz/NM-sparsity
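N:M structured sparsity (as in this repo's name) keeps only N non-zero weights in every group of M consecutive weights, e.g. 2:4 as supported by recent GPU sparse tensor cores. A minimal mask-construction sketch in PyTorch; the function name and defaults are illustrative, and during sparse training the mask is typically recomputed each step with gradients passed through via a straight-through estimator.

```python
import torch

def nm_sparsity_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Build an N:M mask: in every group of m consecutive weights along the
    flattened tensor, keep the n largest-magnitude entries and zero the rest."""
    assert weight.numel() % m == 0, "weight size must be divisible by m"
    groups = weight.reshape(-1, m)
    topk = groups.abs().topk(n, dim=1).indices   # positions to keep per group
    mask = torch.zeros_like(groups)
    mask.scatter_(1, topk, 1.0)
    return mask.reshape(weight.shape)

# Usage: pruned_weight = weight * nm_sparsity_mask(weight, 2, 4)
```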
IST-DASLab/OBC
Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning".
facebookarchive/fbpca
Fast Randomized PCA/SVD
PaddlePaddle/PaddleClas
A treasure chest for visual classification and recognition powered by PaddlePaddle
KwangHoonAn/PACT
Reproduction of the quantization paper "PACT: Parameterized Clipping Activation for Quantized Neural Networks"
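PACT clips activations to a learnable upper bound alpha and uniformly quantizes the clipped range, with a straight-through estimator for the rounding step. A compact PyTorch sketch of that idea; the initialization and bit-width below are assumptions, not the repository's defaults.

```python
import torch
import torch.nn as nn

class PACTActivation(nn.Module):
    """PACT-style activation: clip to a learnable alpha, then quantize to k bits."""

    def __init__(self, bits=4, alpha_init=6.0):
        super().__init__()
        self.bits = bits
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        # Clip into [0, alpha]; writing the clip this way keeps a gradient
        # w.r.t. alpha wherever the input is saturated.
        y = torch.clamp(x, min=0.0)
        y = torch.where(y < self.alpha, y, self.alpha)
        # Uniform quantization with a straight-through estimator on the rounding.
        scale = (2 ** self.bits - 1) / self.alpha
        y_q = torch.round(y * scale) / scale
        return y + (y_q - y).detach()
```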
IntelLabs/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
microsoft/DeepSpeedExamples
Example models using DeepSpeed
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
ricky40403/DSQ
PyTorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks"
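DSQ replaces the hard rounding staircase with a tanh-based soft staircase so the quantizer has usable gradients; a shape parameter controls how closely it approaches true rounding. A rough per-tensor sketch assuming a fixed clipping range and a hand-picked alpha (the actual method learns these and anneals toward hard quantization at inference):

```python
import math
import torch

def dsq_quantize(x: torch.Tensor, bits: int = 4, alpha: float = 0.2,
                 lo: float = -1.0, hi: float = 1.0) -> torch.Tensor:
    """Soft-staircase quantizer: within each quantization interval, a scaled
    tanh interpolates between the two neighbouring levels; smaller alpha
    makes the curve closer to a hard step."""
    n_bins = 2 ** bits - 1                     # number of intervals between levels
    delta = (hi - lo) / n_bins
    x = torch.clamp(x, lo, hi)
    # Interval index of each value and the interval's midpoint.
    i = torch.floor((x - lo) / delta).clamp(max=n_bins - 1)
    m = lo + (i + 0.5) * delta
    # k and s are chosen so the soft step hits the interval edges exactly.
    k = math.log(2.0 / alpha - 1.0) / delta
    s = 1.0 / (1.0 - alpha)
    phi = s * torch.tanh(k * (x - m))          # in [-1, 1] across the interval
    # Map the soft step back onto the two neighbouring quantization levels.
    return lo + delta * (i + 0.5 * (phi + 1.0))
```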