This repository provides the code for the paper Multi-Scale Dense Convolutional Networks for Efficient Prediction.
This paper studies convolutional networks that require limited computational resources at test time. We develop a new network architecture that performs on par with state-of-the-art convolutional networks, while facilitating prediction in two settings: (1) an anytime-prediction setting, in which the network's prediction for one example is progressively updated so that a prediction can be output at any time; and (2) a batch computational budget setting, in which a fixed amount of computation is available to classify a set of examples and can be spent unevenly across 'easier' and 'harder' examples.
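For concreteness, the sketch below shows one way the two settings can be evaluated at test time with a network that exposes one output per intermediate classifier. It is only an illustration: the assumption that `model:forward` returns a table of classifier outputs, and the `threshold` parameter, are not part of the released code.

```lua
-- Illustrative early-exit evaluation (not part of the released training code).
-- Anytime prediction: iterate over the classifiers in order and keep the most
-- recent prediction, so a valid output is available whenever evaluation stops.
-- Budgeted batch prediction: exit as soon as one classifier is confident
-- enough, so 'easy' examples consume less computation than 'hard' ones.
require 'torch'
require 'nn'

local function earlyExitPredict(model, input, threshold)
  local outputs = model:forward(input)   -- assumed: one score vector per classifier
  local prediction
  for k = 1, #outputs do
    local probs = nn.SoftMax():forward(outputs[k])
    local conf, class = probs:max(1)
    prediction = class[1]
    if conf[1] >= threshold then         -- confident enough: stop early
      return prediction, k
    end
  end
  return prediction, #outputs            -- fell through to the last classifier
end
```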
Figure 1: MSDNet layout.
Figure 2: Anytime prediction on ImageNet.
Figure 3: Prediction under batch computational budget on ImageNet.
Figure 4: Random example images from the ImageNet classes Red wine and Volcano. Top row: images that exited from the first classifier of an MSDNet with a correct prediction; bottom row: images that were misclassified by the first classifier but were correctly predicted and exited at the last classifier.
Our code is built on the Torch ResNet implementation (https://github.com/facebook/fb.resnet.torch). The training scripts come with several options, which can be listed with the --help flag:
th main.lua --help
In all experiments, we use a validation set for model selection. We hold out 5,000 training images on CIFAR and 50,000 training images on ImageNet as the validation set.
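As a rough illustration only (the actual split is handled by this repository's data-loading code), such a held-out validation set could be carved out of a 50,000-image CIFAR training set like this:

```lua
-- Hypothetical holdout split; sizes follow the CIFAR setting described above.
require 'torch'

torch.manualSeed(1)
local nTrain, nHeldOut = 50000, 5000
local perm = torch.randperm(nTrain):long()
local valIdx   = perm:narrow(1, 1, nHeldOut)                  -- 5,000 validation images
local trainIdx = perm:narrow(1, nHeldOut + 1, nTrain - nHeldOut)  -- remaining training images
```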
Train an MSDNet with 10 classifiers attached to every other layer for anytime prediction:
th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 10 -stepmode even -step 2 -base 4
Train an MSDNet with 7 classifiers, with the spacing between classifiers increasing linearly, for prediction under a batch computational budget:
th main.lua -netType msdnet -dataset cifar10 -batchSize 64 -nEpochs 300 -nBlocks 7 -stepmode lin_grow -step 1 -base 1
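As a hypothetical illustration of how the -nBlocks, -stepmode, -step and -base flags might map to classifier placements, the helper below computes per-classifier depths for the two step modes. The formulas are assumptions for illustration; the exact rule is defined by the model code in this repository.

```lua
-- Illustrative mapping from command-line flags to classifier depths (assumed).
local function classifierDepths(nBlocks, stepmode, step, base)
  local depths, depth = {}, base
  for i = 1, nBlocks do
    depths[i] = depth
    if stepmode == 'even' then
      depth = depth + step          -- constant spacing between classifiers
    elseif stepmode == 'lin_grow' then
      depth = depth + step * i      -- spacing grows linearly with the classifier index
    end
  end
  return depths
end

-- classifierDepths(10, 'even', 2, 4)    -> {4, 6, 8, ..., 22}
-- classifierDepths(7, 'lin_grow', 1, 1) -> {1, 2, 4, 7, 11, 16, 22}
```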