Model_Compression_Paper
Type of Pruning
Type | F | W | Other
---|---|---|---
Explanation | Filter pruning | Weight pruning | Other types
Conf | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021
---|---|---|---|---|---|---|---
AAAI | 539 | 548 | 649 | 938 | 1147 | 1591 | 1692
CVPR | 602 (71) | 643 (83) | 783 (71) | 979 (70) | 1300 (288) | 1470 (335) |
NeurIPS | 479 | 645 | 954 | 1011 | 1428 | 1900 (105) |
ICLR | oral-15 | | 198 | 336 (23) | 502 (24) | 687 | 860 (53)
ICML | | | 433 | 621 | 774 | 1088 |
IJCAI | 572 | 551 | 660 | 710 | 850 | 592 |
ICCV | | - | 621 | - | 1077 | - |
ECCV | | 415 | - | 778 | - | 1360 |
MLSys | | | | | | |

Counts are accepted papers per year; numbers in parentheses are orals. "-" marks years in which a biennial venue (ICCV, ECCV) was not held.
Reference links:

- MLSys: https://proceedings.mlsys.org/paper/2019
- ICCV 2019: https://dblp.org/db/conf/iccv/iccv2019.html
- ICCV 2017: https://dblp.org/db/conf/iccv/iccv2017.html
- ECCV: https://link.springer.com/conference/eccv
- ECCV (Zhihu summary): https://zhuanlan.zhihu.com/p/157569669
- CVPR 2020: https://dblp.org/db/conf/cvpr/cvpr2020.html

Other venues to watch: ICDE, ECAI, ACCV, WACV (Applications of Computer Vision), BMVC.
Quantization 2015 & 2016 & 2017

Title | Venue | Type | Notes
---|---|---|---
HWGQ: Deep Learning with Low Precision by Half-Wave Gaussian Quantization | CVPR | | Jian Sun
Weighted-Entropy-Based Quantization for Deep Neural Networks | CVPR | | no code
WRPN: Wide Reduced-Precision Networks | ICLR | | Intel; integrated into the Distiller framework
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients | ICLR | | ultra-low bit
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | ECCV | | ultra-low bit
BinaryConnect: Training Deep Neural Networks with Binary Weights During Propagations | NeurIPS | | ultra-low bit
INQ: Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights | ICLR | | Intel
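For the ultra-low-bit entries above, the DoReFa-Net weight quantizer is a compact reference point. A minimal sketch in PyTorch, assuming a straight-through estimator for the rounding step (function names are illustrative, not from any released code):

```python
import torch

def quantize_k(x: torch.Tensor, k: int) -> torch.Tensor:
    # Uniform k-bit quantizer on [0, 1]; the straight-through estimator
    # passes gradients through the non-differentiable round().
    n = float(2 ** k - 1)
    xq = torch.round(x * n) / n
    return x + (xq - x).detach()  # forward: xq, backward: identity

def dorefa_weight(w: torch.Tensor, k: int) -> torch.Tensor:
    # DoReFa-Net k-bit weights: squash with tanh, rescale to [0, 1],
    # quantize, then map back to [-1, 1].
    t = torch.tanh(w)
    x = t / (2 * t.abs().max()) + 0.5
    return 2 * quantize_k(x, k) - 1

w = torch.randn(64, 64, requires_grad=True)
wq = dorefa_weight(w, k=2)  # only 2^2 = 4 distinct weight values
```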
Pruning 2017

Title | Venue | Type | Notes
---|---|---|---
Pruning Filters for Efficient ConvNets | ICLR | F | ranks filters by abs(filter) (L1 norm); see the sketch after this table
Pruning Convolutional Neural Networks for Resource Efficient Inference | ICLR | F | criterion based on a first-order Taylor expansion
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | ICCV | F | selects a channel subset that approximates the full set
Channel Pruning for Accelerating Very Deep Neural Networks | ICCV | F | LASSO regression; Jian Sun
Learning Efficient Convolutional Networks Through Network Slimming | ICCV | F | based on BN scaling factors
Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee | NeurIPS | W | not read yet
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | NeurIPS | W | not read yet
Runtime Neural Pruning | NeurIPS | F | not read yet
Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning | CVPR | F | not read yet
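The abs(filter) criterion in the first row is simple enough to show directly. A minimal PyTorch sketch with illustrative helper names:

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    # Pruning Filters for Efficient ConvNets: score each output filter
    # by the sum of its absolute kernel weights (the L1 norm).
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

def filters_to_keep(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
    # Keep the filters with the largest L1 norms.
    n_keep = conv.out_channels - int(conv.out_channels * prune_ratio)
    return l1_filter_scores(conv).topk(n_keep).indices.sort().values

conv = nn.Conv2d(64, 128, kernel_size=3)
keep = filters_to_keep(conv, prune_ratio=0.5)
pruned = nn.Conv2d(64, len(keep), kernel_size=3)
pruned.weight.data = conv.weight.data[keep].clone()
pruned.bias.data = conv.bias.data[keep].clone()
# The next layer's input channels must be sliced with `keep` as well.
```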
Quantization 2018

Title | Venue | Type | Notes
---|---|---|---
PACT: Parameterized Clipping Activation for Quantized Neural Networks | ICLR | |
Scalable Methods for 8-bit Training of Neural Networks | NeurIPS | |
Two-Step Quantization for Low-Bit Neural Networks | CVPR | |
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference | CVPR | | QAT and BN folding
Joint Training of Low-Precision Neural Network with Quantization Interval Parameters | NeurIPS | | Samsung
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks | ECCV | |
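PACT's core idea fits in a few lines: clip activations to [0, alpha] with a learnable alpha, then quantize uniformly. A sketch assuming PyTorch; the bit width and initial alpha are illustrative:

```python
import torch
import torch.nn as nn

class PACT(nn.Module):
    # Clip activations to [0, alpha] with a learnable alpha, then apply a
    # uniform k-bit quantizer with a straight-through estimator.
    def __init__(self, k: int = 4, alpha_init: float = 6.0):
        super().__init__()
        self.k = k
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # PACT's differentiable clipping: 0.5 * (|x| - |x - alpha| + alpha)
        # equals clip(x, 0, alpha) and gives alpha a gradient where x > alpha.
        y = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        n = 2 ** self.k - 1
        yq = torch.round(y * n / self.alpha) * self.alpha / n
        return y + (yq - y).detach()  # forward: yq, backward: gradient of y

act = PACT(k=4)
out = act(torch.randn(8, 16))
```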
Pruning 2018
Quantization 2019

Title | Venue | Type | Notes
---|---|---|---
ACIQ: Analytical Clipping for Integer Quantization of Neural Networks | ICLR | |
OCS: Improving Neural Network Quantization without Retraining Using Outlier Channel Splitting | ICML | |
Data-Free Quantization Through Weight Equalization and Bias Correction | ICCV (Oral) | |
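OCS rests on a simple identity worth spelling out: splitting a channel in two and halving both copies leaves the layer's output unchanged while halving the outlier's magnitude, which shrinks the quantization range. A sketch for a 2-D (fully connected) weight, assuming the upstream layer duplicates the matching activation channel; the function name is mine:

```python
import torch

def ocs_split_input_channel(w: torch.Tensor, idx: int) -> torch.Tensor:
    # Duplicate input channel `idx` and halve both copies. With the matching
    # activation channel duplicated upstream, sum_j w[:, j] * x[j] is
    # unchanged, but the outlier weight's magnitude is halved.
    half = w[:, idx : idx + 1] / 2
    w2 = torch.cat([w, half], dim=1)  # append the halved copy
    w2[:, idx] = half.squeeze(1)      # halve the original channel
    return w2

w = torch.randn(4, 8)
w[0, 3] = 50.0                        # outlier dominating the weight range
w2 = ocs_split_input_channel(w, idx=3)
# All-ones input sanity check: the duplicated channel also sees x = 1.
assert torch.allclose(w.sum(dim=1), w2.sum(dim=1))
```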
Pruning 2019
Quantization 2020

Title | Venue | Type | Notes
---|---|---|---
Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations | ICLR | |
Post-Training Quantization with Multiple Points: Mixed Precision without Mixed Precision | ICML | |
Towards Unified INT8 Training for Convolutional Neural Network | CVPR | | SenseTime; INT8 backward pass + QAT
APoT: Additive Powers-of-Two Quantization: An Efficient Non-Uniform Discretization for Neural Networks | ICLR | | non-uniform quantization scheme
Post-Training Piecewise Linear Quantization for Deep Neural Networks | ECCV (Oral) | |
Training Quantized Neural Networks With a Full-Precision Auxiliary Module | CVPR (Oral) | |
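As context for the non-uniform scheme note on APoT: plain power-of-two quantization snaps each weight to the nearest level 2^e, which over-concentrates levels near zero; APoT fixes this by building each level as a sum of several power-of-two terms. A sketch of only the basic power-of-two idea, not the full additive scheme:

```python
import torch

def pot_quantize(w: torch.Tensor, k: int) -> torch.Tensor:
    # Snap |w| to the nearest power-of-two level 2^e (nearest in log space),
    # with e in {0, -1, ..., -(2^k - 1)} after scaling w to [-1, 1].
    alpha = w.abs().max()
    x = w / alpha
    min_level = 2.0 ** -(2 ** k - 1)  # smallest representable magnitude
    e = torch.log2(x.abs().clamp(min=min_level)).round().clamp(max=0.0)
    return torch.sign(x) * (2.0 ** e) * alpha

w = torch.randn(1000)
wq = pot_quantize(w, k=3)
print(wq.unique().numel())  # at most 2 * 2^3 + 1 distinct values
```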
Pruning 2020

Distillation 2020

Title | Venue | Type | Notes
---|---|---|---
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation From a Blackbox Model | CVPR (Oral) | |
Pruning lists (by he-y and MingSun-Tse)

- https://github.com/he-y/Awesome-Pruning#2018
- https://github.com/MingSun-Tse/EfficientDNNs
Papers-Lottery Ticket Hypothesis (LTH)
- 2019-ICLR-The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (best paper!)
- 2019-NIPS-Deconstructing lottery tickets: Zeros, signs, and the supermask
- 2019-NIPS-One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
- 2020-ICLR-GraSP: Picking Winning Tickets Before Training By Preserving Gradient Flow [Code]
- 2020-ICLR-Playing the Lottery with Rewards and Multiple Languages: Lottery Tickets in RL and NLP
- 2020-ICLR-The Early Phase of Neural Network Training
- 2020-The Sooner The Better: Investigating Structure of Early Winning Lottery Tickets
- 2020-ICML-Proving the Lottery Ticket Hypothesis: Pruning is All You Need
- 2020-ICML-Rigging the Lottery: Making All Tickets Winners [Code]
- 2020-ICML-Linear Mode Connectivity and the Lottery Ticket Hypothesis
- 2020-ICML-Finding trainable sparse networks through neural tangent transfer
- 2020-NIPS-Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot
- 2020-ICLR (oral)-Comparing Rewinding and Fine-tuning in Neural Network Pruning [Code]
- 2020-NIPS-Logarithmic Pruning is All You Need
- 2020-NIPS-Winning the Lottery with Continuous Sparsification
- 2020.2-Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration
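Most of the papers above build on iterative magnitude pruning (IMP) with rewinding. A minimal sketch of the masking step, assuming PyTorch; the surrounding training loop is outlined in comments:

```python
import torch

def global_magnitude_masks(weights, sparsity):
    # One IMP round: rank all weights by |w| across layers and prune the
    # smallest `sparsity` fraction (mask: 1 = keep, 0 = prune).
    flat = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(flat.numel() * sparsity))
    threshold = flat.kthvalue(k).values
    return [(w.detach().abs() > threshold).float() for w in weights]

ws = [torch.randn(100, 100), torch.randn(10, 100)]
masks = global_magnitude_masks(ws, sparsity=0.8)

# Lottery-ticket procedure around the masking step (outline):
#   1. Save the initialization theta_0 (or an early checkpoint, per
#      "The Early Phase of Neural Network Training").
#   2. Train to convergence; compute masks with global_magnitude_masks.
#   3. Rewind surviving weights to theta_0 and retrain, applying
#      w.data.mul_(mask) after every optimizer step.
#   4. Repeat 2-3 with increasing sparsity (iterative pruning).
```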
Papers-Bayesian Compression
- 1995-Neural Computation-Bayesian Regularisation and Pruning using a Laplace Prior
- 1997-Neural Networks-Regularization with a Pruning Prior
- 2015-NIPS-Bayesian dark knowledge
- 2017-NIPS-Bayesian Compression for Deep Learning [Code]
- 2017-ICML-Variational dropout sparsifies deep neural networks
- 2017-NIPS (oral)-Structured Bayesian Pruning via Log-Normal Multiplicative Noise
- 2017-ICML (workshop)-Bayesian Sparsification of Recurrent Neural Networks
- 2020-NIPS-Bayesian Bits: Unifying Quantization and Pruning