pruning_paper

Paper reviews for Awesome Pruning

Type of Pruning

| Type | F | W | Other |
|:----:|:--------------:|:-------------:|:-----------:|
| Explanation | Filter pruning | Weight pruning | Other types |

2021

| Title | Venue | Type | Code | Note |
|:------|:-----:|:----:|:----:|:-----|
| A Probabilistic Approach to Neural Network Pruning | ICML | F | - | |
| Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework | ICML | F | - | |
| Group Fisher Pruning for Practical Network Compression | ICML | F | PyTorch(Author) | Proposes a way to compute the importance of feature maps (see the Fisher sketch after this table). |
| 🔥 On the Predictability of Pruning Across Scales | ICML | W | - | Proposes a way to predict how much accuracy a network will retain at a given pruning ratio. |
| Towards Compact CNNs via Collaborative Compression | CVPR | F | PyTorch(Author) | Proposes channel pruning combined with tensor decomposition; that part is not relevant to my work. Also proposes a global compression rate optimization method that automatically selects each layer's compression rate $R^i$ when the whole-network compression rate $C$ is given. |
| Content-Aware GAN Compression | CVPR | F | PyTorch(Author) | A GAN-specific method; not relevant to my work. |
| Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks | CVPR | F | PyTorch(Author) | Aims at computational efficiency; a study on vector quantization. A different direction from mine. |
| NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration | CVPR | F | - | Proposes compiler-efficient pruning specialized for mobile platforms, combined with NAS. A different direction from mine. |
| Network Pruning via Performance Maximization | CVPR | F | - | Because minimizing the loss does not guarantee the best accuracy, proposes a loss function that maximizes the predicted accuracy. A performance prediction network (PN) estimates the accuracy of a sub-network (= pruned network) and is used inside that loss; episodic memory is introduced to train the PN stably. |
| Convolutional Neural Network Pruning with Structural Redundancy Reduction | CVPR | F | - | Argues that pruning by structural redundancy outperforms pruning the least important filters: estimate each layer's redundancy, then remove unimportant filters from the most redundant layer (see the redundancy sketch after this table). |
| Manifold Regularized Dynamic Network Pruning | CVPR | F | - | Prunes differently for each input. To account for relations between inputs, embeds the inputs' manifold information into the pruned-network space. In my experience, dynamic or attention approaches are not effective on EEG. |
| Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation | CVPR | FO | - | Uses pruning and distillation to make NAS efficient for detection. Not aligned with my direction. |
| A Gradient Flow Framework For Analyzing Network Pruning | ICLR | F | PyTorch(Author) | Worth a look since it covers standard methods. It does not seem to propose a new method; rather, it offers an analytical view. Many recent methods apply importance-measure-based pruning during training so that training and pruning happen together, and this paper appears to analyze their theoretical grounds. Worth reading its introduction to importance-measure-based pruning and trying it (likely the SOTA in this area). |
| 🔥 Neural Pruning via Growing Regularization | ICLR | F | PyTorch(Author) | Unsure about the method, but the code is public and reportedly easy to implement. |
| ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations | ICLR | F | PyTorch(Author) | |
| Network Pruning That Matters: A Case Study on Retraining Variants | ICLR | F | PyTorch(Author) | Finds that choosing the learning rate well when retraining a pruned network is important: a larger value works better, and a pruned model retrained with a well-chosen learning rate can beat dedicated pruning methods. Interesting, but a different direction from mine. |
| Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | ICLR | W | PyTorch(Author) | Hypothesizes that pruning a sufficiently over-parameterized, randomly initialized network well can match a trained model's accuracy without any training. A different direction from mine. |
| 🔥 Layer-adaptive Sparsity for the Magnitude-based Pruning | ICLR | W | PyTorch(Author) | Proposes how to choose how much to prune in each layer (see the LAMP sketch after this table). A similar direction to mine. |
| Pruning Neural Networks at Initialization: Why Are We Missing the Mark? | ICLR | W | - | Rather than finding per-layer pruning ratios, presents a method that decides for each weight itself whether to prune it. |
| ~~Robust Pruning at Initialization~~ | ICLR | W | - | Pruning at initialization is not aligned with my direction. |
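
For the Group Fisher Pruning entry above: a minimal sketch of a Fisher-style feature-map importance score, assuming a generic PyTorch CNN, loss function, and data loader. It shows only the core estimate (squared activation-gradient product, which approximates the loss increase from zeroing a channel), not the authors' full coupled-channel algorithm.

```python
import torch

def fisher_channel_importance(model, layer, data_loader, loss_fn, device="cpu"):
    """Fisher-style importance of each channel of `layer`'s output feature map.

    Accumulates (sum over spatial positions of activation * gradient)^2 per
    channel; this approximates the loss increase caused by zeroing the channel.
    """
    scores, acts = None, {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep the feature map's gradient
        acts["out"] = output

    handle = layer.register_forward_hook(hook)
    model.to(device).train()
    for x, y in data_loader:
        model.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        a, g = acts["out"], acts["out"].grad           # both (N, C, H, W)
        s = (a * g).sum(dim=(2, 3)).pow(2).sum(dim=0)  # (C,)
        scores = s if scores is None else scores + s
    handle.remove()
    return scores  # higher score = more important channel
```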
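
For the Structural Redundancy Reduction entry: the paper measures layer redundancy with a graph-based construction; the sketch below deliberately swaps in a much simpler proxy (mean pairwise cosine similarity between a layer's filters) purely to illustrate the two-step select-layer-then-filter procedure.

```python
import torch
import torch.nn.functional as F

def layer_redundancy(conv_weight):
    """Crude redundancy proxy: mean absolute pairwise cosine similarity
    between filters (NOT the paper's graph-based measure)."""
    f = F.normalize(conv_weight.detach().flatten(1), dim=1)  # (C_out, rest)
    sim = f @ f.t()
    n = f.size(0)
    return (sim - torch.eye(n, device=f.device)).abs().sum() / (n * (n - 1))

def prune_candidates(convs, n_filters):
    """Pick the most redundant conv layer, then mark its n_filters
    smallest-L1-norm (least important) filters for pruning."""
    i = max(range(len(convs)), key=lambda k: layer_redundancy(convs[k].weight))
    norms = convs[i].weight.detach().flatten(1).norm(p=1, dim=1)
    return i, norms.argsort()[:n_filters]
```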
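
For the Layer-adaptive Sparsity (LAMP) entry: a minimal sketch of the LAMP score as I understand it from the paper (each weight's squared magnitude divided by the sum of squared magnitudes of all same-layer weights of equal or larger magnitude). Pruning globally by ascending score then yields the per-layer ratios automatically.

```python
import torch

def lamp_scores(weight):
    """LAMP score per weight: w^2 / sum of w'^2 over weights in the same
    layer whose magnitude is >= |w| (including w itself)."""
    w2 = weight.detach().flatten().pow(2)
    sorted_w2, order = torch.sort(w2)               # ascending magnitudes
    suffix = sorted_w2.flip(0).cumsum(0).flip(0)    # sum of this and larger
    scores = torch.empty_like(w2)
    scores[order] = sorted_w2 / suffix
    return scores.view_as(weight)

def global_lamp_masks(weights, sparsity):
    """Keep-masks after removing the `sparsity` fraction of weights with
    the smallest LAMP scores, pooled across all layers."""
    pooled = torch.cat([lamp_scores(w).flatten() for w in weights])
    k = max(1, int(sparsity * pooled.numel()))
    threshold = pooled.kthvalue(k).values
    return [lamp_scores(w) > threshold for w in weights]
```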

2020

| Title | Venue | Type | Code | Note |
|:------|:-----:|:----:|:----:|:-----|
| HYDRA: Pruning Adversarially Robust Neural Networks | NeurIPS | W | PyTorch(Author) | |
| Logarithmic Pruning is All You Need | NeurIPS | W | - | |
| Directional Pruning of Deep Neural Networks | NeurIPS | W | - | |
| Movement Pruning: Adaptive Sparsity by Fine-Tuning | NeurIPS | W | PyTorch(Author) | |
| Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | NeurIPS | W | PyTorch(Author) | |
| Neuron Merging: Compensating for Pruned Neurons | NeurIPS | F | PyTorch(Author) | |
| Neuron-level Structured Pruning using Polarization Regularizer | NeurIPS | F | PyTorch(Author) | |
| SCOP: Scientific Control for Reliable Neural Network Pruning | NeurIPS | F | PyTorch(Author) | |
| Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning | NeurIPS | F | - | |
| The Generalization-Stability Tradeoff In Neural Network Pruning | NeurIPS | F | PyTorch(Author) | |
| Pruning Filter in Filter | NeurIPS | Other | PyTorch(Author) | |
| Position-based Scaled Gradient for Model Quantization and Pruning | NeurIPS | Other | PyTorch(Author) | |
| Bayesian Bits: Unifying Quantization and Pruning | NeurIPS | Other | - | |
| Pruning neural networks without any data by iteratively conserving synaptic flow | NeurIPS | Other | PyTorch(Author) | |
| EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | ECCV (Oral) | F | PyTorch(Author) | |
| DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation | ECCV | F | - | |
| DHP: Differentiable Meta Pruning via HyperNetworks | ECCV | F | PyTorch(Author) | |
| Meta-Learning with Network Pruning | ECCV | W | - | |
| Accelerating CNN Training by Pruning Activation Gradients | ECCV | W | - | |
| DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search | ECCV | Other | - | |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | ECCV | Other | - | |
| Channel Pruning via Automatic Structure Search | IJCAI | F | PyTorch(Author) | |
| Adversarial Neural Pruning with Latent Vulnerability Suppression | ICML | W | - | |
| Proving the Lottery Ticket Hypothesis: Pruning is All You Need | ICML | W | - | |
| Soft Threshold Weight Reparameterization for Learnable Sparsity | ICML | WF | PyTorch(Author) | |
| Network Pruning by Greedy Subnetwork Selection | ICML | F | - | |
| Operation-Aware Soft Channel Pruning using Differentiable Masks | ICML | F | - | |
| DropNet: Reducing Neural Network Complexity via Iterative Pruning | ICML | F | - | |
| Towards Efficient Model Compression via Learned Global Ranking | CVPR (Oral) | F | PyTorch(Author) | Ranks all filters globally (across layers) and prunes as many as needed, seeking the sweet spot of the accuracy-speed trade-off. Not my direction. |
| HRank: Filter Pruning using High-Rank Feature Map | CVPR (Oral) | F | PyTorch(Author) | Computes the rank of each feature map via SVD (see the rank sketch after this table). Not applicable to my setting. |
| Neural Network Pruning with Residual-Connections and Limited-Data | CVPR (Oral) | F | - | |
| Multi-Dimensional Pruning: A Unified Framework for Model Compression | CVPR (Oral) | WF | - | |
| DMCP: Differentiable Markov Channel Pruning for Neural Networks | CVPR (Oral) | F | TensorFlow(Author) | |
| Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression | CVPR | F | PyTorch(Author) | |
| Few Sample Knowledge Distillation for Efficient Network Compression | CVPR | F | - | |
| Discrete Model Compression With Resource Constraint for Deep Neural Networks | CVPR | F | - | |
| Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | CVPR | W | - | |
| Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration | CVPR | F | - | |
| APQ: Joint Search for Network Architecture, Pruning and Quantization Policy | CVPR | F | - | |
| Comparing Rewinding and Fine-tuning in Neural Network Pruning | ICLR (Oral) | WF | TensorFlow(Author) | |
| A Signal Propagation Perspective for Pruning Neural Networks at Initialization | ICLR (Spotlight) | W | - | |
| ProxSGD: Training Structured Neural Networks under Regularization and Constraints | ICLR | W | TF+PT(Author) | |
| One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation | ICLR | W | - | |
| Lookahead: A Far-sighted Alternative of Magnitude-based Pruning | ICLR | W | PyTorch(Author) | |
| Dynamic Model Pruning with Feedback | ICLR | WF | - | |
| Provable Filter Pruning for Efficient Neural Networks | ICLR | F | - | |
| Data-Independent Neural Pruning via Coresets | ICLR | W | - | |
| AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates | AAAI | F | - | |
| DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | AAAI | Other | - | |
| Pruning from Scratch | AAAI | Other | - | |
| Reborn filters: Pruning convolutional neural networks with limited data | AAAI | F | - | |
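
For the HRank entry above: a minimal sketch of the rank score, assuming feature maps have been captured from one conv layer over a batch; `torch.linalg.matrix_rank` (SVD-based) plays the role of the SVD mentioned in the note.

```python
import torch

def average_feature_map_rank(feature_maps):
    """HRank-style filter score: the average matrix rank of each channel's
    feature map across a batch. Filters whose maps have consistently low
    rank carry less information and are pruned first.

    feature_maps: (N, C, H, W) tensor captured from one conv layer.
    """
    # matrix_rank is batched over the leading dims and uses SVD internally
    ranks = torch.linalg.matrix_rank(feature_maps.detach().float())  # (N, C)
    return ranks.float().mean(dim=0)  # per-channel average rank
```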

2019

| Title | Venue | Type | Code | Note |
|:------|:-----:|:----:|:----:|:-----|
| Network Pruning via Transformable Architecture Search | NeurIPS | F | PyTorch(Author) | |
| Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks | NeurIPS | F | PyTorch(Author) | |
| Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask | NeurIPS | W | TensorFlow(Author) | |
| One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers | NeurIPS | W | - | |
| Global Sparse Momentum SGD for Pruning Very Deep Neural Networks | NeurIPS | W | PyTorch(Author) | |
| AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters | NeurIPS | W | - | |
| Model Compression with Adversarial Robustness: A Unified Optimization Framework | NeurIPS | Other | PyTorch(Author) | |
| MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning | ICCV | F | PyTorch(Author) | |
| Accelerate CNN via Recursive Bayesian Pruning | ICCV | F | - | |
| Adversarial Robustness vs Model Compression, or Both? | ICCV | W | PyTorch(Author) | |
| Learning Filter Basis for Convolutional Neural Network Compression | ICCV | Other | - | |
| Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration | CVPR (Oral) | F | PyTorch(Author) | |
| Towards Optimal Structured CNN Pruning via Generative Adversarial Learning | CVPR | F | PyTorch(Author) | |
| Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure | CVPR | F | PyTorch(Author) | |
| On Implicit Filter Level Sparsity in Convolutional Neural Networks, Extension1, Extension2 | CVPR | F | PyTorch(Author) | |
| Structured Pruning of Neural Networks with Budget-Aware Regularization | CVPR | F | - | |
| 🔥 Importance Estimation for Neural Network Pruning | CVPR | F | PyTorch(Author) | Computes importance from the loss increase when a weight is removed, then prunes a preset fraction (see the Taylor sketch after this table). |
| OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks | CVPR | F | - | |
| Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | CVPR | Other | TensorFlow(Author) | |
| Variational Convolutional Neural Network Pruning | CVPR | - | - | |
| The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | ICLR (Best) | W | TensorFlow(Author) | |
| Rethinking the Value of Network Pruning | ICLR | F | PyTorch(Author) | |
| Dynamic Channel Pruning: Feature Boosting and Suppression | ICLR | F | TensorFlow(Author) | |
| SNIP: Single-shot Network Pruning based on Connection Sensitivity | ICLR | W | TensorFlow(Author) | |
| Dynamic Sparse Graph for Efficient Deep Learning | ICLR | F | CUDA(3rd) | |
| Collaborative Channel Pruning for Deep Networks | ICML | F | - | |
| Approximated Oracle Filter Pruning for Destructive CNN Width Optimization [github] | ICML | F | - | |
| EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis | ICML | W | PyTorch(Author) | |
| COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | IJCAI | F | TensorFlow(Author) | |
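
For the Importance Estimation entry above: a minimal sketch of a first-order Taylor importance score (the squared product of weight and gradient approximates the loss change from removing the weight), aggregated per filter. It assumes `loss.backward()` has already populated gradients; the paper discusses more than one grouping variant, and this shows just one.

```python
import torch

def taylor_filter_importance(conv):
    """First-order Taylor importance per output filter of a Conv2d:
    I = (sum over the filter's weights of w * grad)^2, i.e. the squared
    predicted loss change if the filter were removed."""
    w, g = conv.weight, conv.weight.grad
    return (w * g).flatten(1).sum(dim=1).pow(2)      # (out_channels,)

def lowest_importance_filters(scores, ratio=0.3):
    """Indices of the least important filters for a preset pruning ratio."""
    k = int(ratio * scores.numel())
    return scores.argsort()[:k]
```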

2018

| Title | Venue | Type | Code | Note |
|:------|:-----:|:----:|:----:|:-----|
| Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers | ICLR | F | TensorFlow(Author), PyTorch(3rd) | |
| To prune, or not to prune: exploring the efficacy of pruning for model compression | ICLR | W | - | |
| Discrimination-aware Channel Pruning for Deep Neural Networks | NeurIPS | F | TensorFlow(Author) | |
| Frequency-Domain Dynamic Pruning for Convolutional Neural Networks | NeurIPS | W | - | |
| Learning Sparse Neural Networks via Sensitivity-Driven Regularization | NeurIPS | WF | - | |
| AMC: AutoML for Model Compression and Acceleration on Mobile Devices | ECCV | F | TensorFlow(3rd) | |
| Data-Driven Sparse Structure Selection for Deep Neural Networks | ECCV | F | MXNet(Author) | |
| Coreset-Based Neural Network Compression | ECCV | F | PyTorch(Author) | |
| Constraint-Aware Deep Neural Network Compression | ECCV | W | SkimCaffe(Author) | |
| A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers | ECCV | W | Caffe(Author) | |
| PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning | CVPR | F | PyTorch(Author) | |
| NISP: Pruning Networks using Neuron Importance Score Propagation | CVPR | F | - | |
| CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization | CVPR | W | - | |
| “Learning-Compression” Algorithms for Neural Net Pruning | CVPR | W | - | |
| Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks | IJCAI | F | PyTorch(Author) | Zeroizes filters according to a preset ratio (see the sketch after this table). |
| Accelerating Convolutional Networks via Global & Dynamic Filter Pruning | IJCAI | F | - | |
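
For the Soft Filter Pruning entry above: a minimal sketch of the zeroize step, assuming L2-norm filter ranking as in the paper. There this step runs after every training epoch, and the zeroed filters keep receiving gradient updates, so they can recover.

```python
import torch

@torch.no_grad()
def soft_filter_prune(conv, ratio=0.3):
    """Zero out the preset `ratio` fraction of this Conv2d's filters with
    the smallest L2 norms. Soft pruning: the weights stay in the model and
    continue to be updated, unlike hard (permanent) filter removal."""
    norms = conv.weight.flatten(1).norm(p=2, dim=1)   # (out_channels,)
    k = int(ratio * norms.numel())
    drop = norms.argsort()[:k]                        # smallest-norm filters
    conv.weight[drop] = 0.0
    if conv.bias is not None:
        conv.bias[drop] = 0.0
```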

2017

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Pruning Filters for Efficient ConvNets | ICLR | F | PyTorch(3rd) |
| Pruning Convolutional Neural Networks for Resource Efficient Inference | ICLR | F | TensorFlow(3rd) |
| Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee | NeurIPS | W | TensorFlow(Author) |
| Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | NeurIPS | W | PyTorch(Author) |
| Runtime Neural Pruning | NeurIPS | F | - |
| Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning | CVPR | F | - |
| ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | ICCV | F | Caffe(Author), PyTorch(3rd) |
| Channel pruning for accelerating very deep neural networks | ICCV | F | Caffe(Author) |
| Learning Efficient Convolutional Networks Through Network Slimming | ICCV | F | PyTorch(Author) |

2016

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | ICLR (Best) | W | Caffe(Author) |
| Dynamic Network Surgery for Efficient DNNs | NeurIPS | W | Caffe(Author) |

2015

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| Learning both Weights and Connections for Efficient Neural Networks | NeurIPS | W | PyTorch(3rd) |