Network Slimming

Example code for the paper Learning Efficient Convolutional Networks through Network Slimming (ICCV 2017).

The code is based on fb.resnet.torch.

Citation:

@inproceedings{Liu2017learning,
	title = {Learning Efficient Convolutional Networks through Network Slimming},
	author = {Liu, Zhuang and Li, Jianguo and Shen, Zhiqiang and Huang, Gao and Yan, Shoumeng and Zhang, Changshui},
	booktitle = {ICCV},
	year = {2017}
}

Introduction

Network Slimming is a neural network training scheme that simultaneously reduces model size, run-time memory footprint, and computing operations, while introducing no accuracy loss and minimal overhead to the training process. The resulting models require no special libraries or hardware for efficient inference.

Approach

Figure 1: The channel pruning process.

We associate a scaling factor (reused from the batch normalization layers) with each channel in the convolutional layers. Sparsity regularization is imposed on these scaling factors during training to automatically identify unimportant channels. Channels with small scaling factor values (shown in orange on the left side of Figure 1) are then pruned. After pruning, we obtain compact models (right side), which are fine-tuned to achieve accuracy comparable to (or even higher than) that of the normally trained full network.
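
Since the scaling factors are simply the batch-normalization gamma parameters, the sparsity regularization can be applied as an extra subgradient step during training. The following Torch/Lua sketch illustrates the idea under our assumptions (it is not necessarily the repo's exact code; the sparsity weight lambda is a hypothetical value chosen for illustration):

-- Sketch of the sparse-training step: after the normal backward pass,
-- add the subgradient of an L1 penalty on every BN scaling factor.
local lambda = 1e-4  -- assumed sparsity weight, for illustration only
model:apply(function(m)
   if torch.type(m) == 'nn.SpatialBatchNormalization' then
      -- d(lambda * |gamma|)/d(gamma) = lambda * sign(gamma)
      m.gradWeight:add(lambda, torch.sign(m.weight))
   end
end)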


Figure 2: Flow chart of the network slimming procedure. The dotted line denotes the multi-pass version of the procedure.
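
As a rough illustration of the pruning step in this flow chart, the sketch below gathers the batch-normalization scaling factors of a trained model and derives a global magnitude threshold from a prune ratio (the 70% ratio is purely illustrative, not a recommended setting):

-- Collect |gamma| from every BN layer of a trained 'model'.
local gammas = {}
model:apply(function(m)
   if torch.type(m) == 'nn.SpatialBatchNormalization' then
      table.insert(gammas, m.weight:clone():abs())
   end
end)
local all = torch.cat(gammas, 1)
-- Sort all magnitudes and pick the value below which a fraction
-- 'pruneRatio' of the channels falls. Channels under the threshold
-- are pruned; the survivors are copied into a narrower network,
-- which is then fine-tuned.
local pruneRatio = 0.7  -- assumed ratio, for illustration only
local sorted = torch.sort(all)
local threshold = sorted[math.max(1, math.floor(pruneRatio * all:nElement()))]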

Usage

This repo holds the example code for VGGNet on the CIFAR-10 dataset.

To run the example, simply type

sh example.sh

More detailed instructions are included as comments in the file example.sh.

Contact

liuzhuangthu at gmail.com