
A collection of papers on model compression and acceleration: Pruning, Quantization, Knowledge Distillation, Low-Rank Factorization, etc.

Awesome Compression Papers

  • Paper collection about model compression and acceleration:
    • 1. Pruning
      • 1.1. Filter Pruning
      • 1.2. Weight Pruning
    • 2. Quantization
      • 2.1. Multi-bit Quantization
      • 2.2. 1-bit Quantization
    • 3. Light-weight Design
    • 4. Knowledge Distillation
    • 5. Tensor Decomposition
    • 6. Other
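For readers new to these categories, here is a minimal NumPy sketch of two of the simplest ideas behind them: magnitude-based weight pruning and symmetric uniform (multi-bit) quantization. This is an illustrative assumption for orientation only, not the method of any paper listed below.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

def uniform_quantize(weights, bits=8):
    """Symmetric uniform quantization onto a (2^bits - 1)-level grid,
    returned in dequantized ("fake-quantized") form."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    q = np.round(weights / scale)
    return q * scale

w = np.random.randn(4, 4).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.75)  # ~75% of entries become zero
w_quant = uniform_quantize(w, bits=4)         # values snapped to a 4-bit grid
```

Most of the filter-pruning and mixed-precision papers below can be read as structured or learned refinements of these two baselines.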

2020

2020-CVPR

1. Pruning

1.1. Filter Pruning
1.2. Weight Pruning

2. Quantization

2.1. Multi-bit Quantization
2.2. 1-bit Quantization

3. Light-weight Design

4. Knowledge Distillation

5. Tensor Decomposition

6. Other

2020-ECCV

  • 2020-ECCV-EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning
  • 2020-ECCV-ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions
  • 2020-ECCV-Knowledge Distillation Meets Self-Supervision
  • 2020-ECCV-Differentiable Feature Aggregation Search for Knowledge Distillation
  • 2020-ECCV-Post-Training Piecewise Linear Quantization for Deep Neural Networks
  • 2020-ECCV-GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
  • 2020-ECCV-Online Ensemble Model Compression using Knowledge Distillation
  • 2020-ECCV-Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
  • 2020-ECCV-DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation
  • 2020-ECCV-Accelerating CNN Training by Pruning Activation Gradients
  • 2020-ECCV-DHP: Differentiable Meta Pruning via HyperNetworks
  • 2020-ECCV-Differentiable Joint Pruning and Quantization for Hardware Efficiency
  • 2020-ECCV-Meta-Learning with Network Pruning
  • 2020-ECCV-BATS: Binary ArchitecTure Search
  • 2020-ECCV-Learning Architectures for Binary Networks
  • 2020-ECCV-DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search
  • 2020-ECCV-Knowledge Transfer via Dense Cross-Layer Mutual-Distillation
  • 2020-ECCV-Generative Low-bitwidth Data Free Quantization
  • 2020-ECCV-HMQ: Hardware Friendly Mixed Precision Quantization Block for CNNs
  • 2020-ECCV-Search What You Want: Barrier Penalty NAS for Mixed Precision Quantization
  • 2020-ECCV-Rethinking Bottleneck Structure for Efficient Mobile Network Design
  • 2020-ECCV-PSConv: Squeezing Feature Pyramid into One Compact Poly-Scale Convolutional Layer
  • ...
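Several of the ECCV papers above build on knowledge distillation. As background, a minimal NumPy sketch of the classic softened-softmax distillation loss (temperature-scaled KL divergence between teacher and student outputs); this is the generic textbook formulation, not the specific method of any paper listed here.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax, computed stably."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    return T ** 2 * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

In practice this term is combined with the ordinary cross-entropy loss on the hard labels, weighted by a mixing coefficient.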

2020-NeurIPS

2020-ICML

2020-ICLR

2020-AAAI

2019

2019-CVPR

2019-ICCV

2019-NeurIPS

2019-ICML

2019-ICLR

2018

2018-CVPR

2018-ECCV

2018-NeurIPS

2018-ICML

2018-ICLR

References

https://github.com/MingSun-Tse/EfficientDNNs

https://github.com/danielmcpark/awesome-pruning-acceleration

https://github.com/csyhhu/Awesome-Deep-Neural-Network-Compression

https://github.com/he-y/Awesome-Pruning