By Jie Hu [1], Li Shen [2], and Gang Sun [1]. (arXiv)
[1] Momenta, [2] University of Oxford.
Figure 1: Diagram of a Squeeze-and-Excitation building block.
Figure 2: Schema of SE-Inception and SE-ResNet modules.
This repository contains a Caffe implementation of Squeeze-and-Excitation Networks.
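As a quick reference for Figure 1, below is a minimal NumPy sketch of what an SE block computes (squeeze, excitation, rescale). The function and weight names are illustrative, not identifiers from this repository.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """x: (N, C, H, W) feature map; w1: (C, C//r); w2: (C//r, C)."""
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = x.mean(axis=(2, 3))                    # (N, C)
    # Excitation: FC -> ReLU -> FC -> sigmoid yields per-channel gates in (0, 1).
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)  # (N, C)
    # Rescale: reweight every channel of the input by its gate.
    return x * s[:, :, None, None]
```

The two fully connected layers form a bottleneck with reduction ratio r (r = 16 in the paper), which keeps the added parameter count small. The data augmentation settings used for training are: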
Method | Settings |
---|---|
Random Mirror | True |
Random Crop | 8% ~ 100% of image area |
Aspect Ratio | 3/4 ~ 4/3 |
Random Rotation | -10° ~ 10° |
Pixel Jitter | -20 ~ 20 |
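For illustration, the policy above can be rendered as a short NumPy/SciPy sketch. The function name, the HWC layout, and the interpretation of pixel jitter as a single random offset are assumptions, not the repository's actual pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(img, rng=np.random):
    """img: (H, W, 3) uint8 image; returns a float32 augmented crop."""
    h, w, _ = img.shape
    # Random mirror: horizontal flip with probability 0.5.
    if rng.rand() < 0.5:
        img = img[:, ::-1, :]
    # Random rotation: uniform angle in [-10, 10] degrees.
    img = rotate(img, rng.uniform(-10.0, 10.0), reshape=False, mode='nearest')
    # Random crop: 8%-100% of the image area, aspect ratio in [3/4, 4/3].
    area = rng.uniform(0.08, 1.0) * h * w
    ratio = rng.uniform(3.0 / 4.0, 4.0 / 3.0)
    ch = min(h, int(round(np.sqrt(area / ratio))))
    cw = min(w, int(round(np.sqrt(area * ratio))))
    y0 = rng.randint(0, h - ch + 1)
    x0 = rng.randint(0, w - cw + 1)
    img = img[y0:y0 + ch, x0:x0 + cw, :]
    # Pixel jitter: add a random offset in [-20, 20], then clip.
    img = img.astype(np.float32) + rng.uniform(-20.0, 20.0)
    return np.clip(img, 0.0, 255.0)
```

In training, the resulting crop would then be resized to the 224x224 network input.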
- For efficient training and testing, we merge the consecutive channel-wise scale and element-wise summation operations into a single "Axpy" layer in the architectures with skip-connections (see the sketch after this list), which considerably reduces memory consumption and running time.
- Additionally, we found that the global average pooling provided by cuDNN and BVLC/caffe is very slow on the GPU, so we re-implemented this operation with a custom GPU kernel and achieved a significant speedup.
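For reference, the fused Axpy operation computes nothing more exotic than a channel-wise broadcasted multiply-add; a minimal NumPy sketch of the forward pass, with assumed shapes and names, is:

```python
import numpy as np

def axpy_forward(scale, x, residual):
    """scale: (N, C, 1, 1) SE gates; x, residual: (N, C, H, W)."""
    # One fused pass instead of a Scale layer followed by an Eltwise SUM,
    # avoiding materialization of the intermediate scaled tensor.
    return scale * x + residual
```

Fusing the two layers saves one full read and write of an N x C x H x W tensor per skip-connection, which is where the memory and time savings come from.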
Table 1. Single-crop validation error on ImageNet-1k (center 224x224 crop from the resized image with shorter side = 256). SENet* is one of the superior models we used in the ILSVRC 2017 Image Classification Challenge, where we won 1st place (team name: WMW).
Model | Top-1 | Top-5 | Size | Caffe Model |
---|---|---|---|---|
SE-BN-Inception | 23.62 | 7.04 | 46 M | GoogleDrive |
SE-ResNet-50 | 22.37 | 6.36 | 107 M | GoogleDrive |
SE-ResNet-101 | 21.75 | 5.72 | 189 M | GoogleDrive |
SE-ResNet-152 | 21.34 | 5.54 | 256 M | GoogleDrive |
SE-ResNeXt-50 (32 x 4d) | 20.97 | 5.54 | 105 M | GoogleDrive |
SE-ResNeXt-101 (32 x 4d) | 19.81 | 4.96 | 187 M | GoogleDrive |
SENet* | 18.68 | 4.47 | 440 M | GoogleDrive |
Here we obtain better performance than reported in the paper. We re-trained all of the above models on a single server with 8 NVIDIA Titan X GPUs, using a mini-batch size of 256 and an initial learning rate of 0.1, trained for more epochs. In the paper, we used a larger mini-batch size (1024) and initial learning rate (0.6).
If you use Squeeze-and-Excitation Networks in your research, please cite the paper:
```
@article{hu2017,
  title={Squeeze-and-Excitation Networks},
  author={Jie Hu and Li Shen and Gang Sun},
  journal={arXiv preprint arXiv:},
  year={2017}
}
```