
Squeeze-and-Excitation Networks

Apache License 2.0

Squeeze-and-Excitation Networks (paper)

By Jie Hu[1], Li Shen[2], Gang Sun[1].

Momenta[1] and University of Oxford[2].

Approach

Figure 1: Diagram of a Squeeze-and-Excitation building block.

Figure 2: Schema of SE-Inception and SE-ResNet modules. We set r = 16 in all our models.
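
Concretely, the SE block in Figure 1 amounts to three steps: a global-average-pooling "squeeze" that summarizes each channel as a scalar, a two-layer bottleneck "excitation" that maps these scalars to per-channel gates, and a channel-wise rescaling of the input. The repository itself is in Caffe; the following is only a minimal PyTorch sketch of the same computation (the class name SEBlock is ours, and reduction=16 follows the r = 16 setting above):

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation block: squeeze -> excitation -> scale."""
    def __init__(self, channels, reduction=16):  # r = 16, as in Figure 2
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        n, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                 # squeeze: global average pooling -> (n, c)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # excitation: per-channel gates in (0, 1)
        return x * s.view(n, c, 1, 1)                          # scale: reweight each channel of x

In the SE-ResNet module of Figure 2, this block is applied to the output of the residual branch before the identity addition.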

Implementation

In this repository, Squeeze-and-Excitation Networks are implemented in Caffe.

Augmentation

Method            Settings
Random Mirror     true
Random Crop       8% ~ 100%
Aspect Ratio      3/4 ~ 4/3
Random Rotation   -10° ~ 10°
Pixel Jitter      -20 ~ 20
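
If you want to reproduce this augmentation outside the Caffe data layer, the table maps onto a standard pipeline. Below is a rough torchvision sketch under that assumption; pixel_jitter is a hypothetical helper, since torchvision has no built-in additive pixel jitter:

import numpy as np
from PIL import Image
from torchvision import transforms

def pixel_jitter(img, magnitude=20):
    """Hypothetical helper: add uniform per-pixel noise in [-magnitude, magnitude]."""
    arr = np.asarray(img).astype(np.int16)
    noise = np.random.randint(-magnitude, magnitude + 1, arr.shape, dtype=np.int16)
    return Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0),   # random crop: 8% ~ 100%
                                 ratio=(3 / 4, 4 / 3)),    # aspect ratio: 3/4 ~ 4/3
    transforms.RandomHorizontalFlip(),                     # random mirror
    transforms.RandomRotation(10),                         # random rotation: -10° ~ 10°
    transforms.Lambda(pixel_jitter),                       # pixel jitter: -20 ~ 20
    transforms.ToTensor(),
])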

Note:

  • To improve compatibility with the official BVLC Caffe, I replaced the Axpy layers with channel-wise Scale layers followed by Eltwise summation layers (see the sketch below). You can run the models with the official BVLC Caffe or with the Caffe importer in OpenCV without any modification.
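
The replacement is behavior-preserving: Axpy computes a * x + y in one layer, which is exactly a channel-wise Scale of x by the SE gates a followed by an Eltwise sum with the shortcut y. A small PyTorch sketch of the identity (the function names here are ours):

import torch

def axpy(a, x, y):
    # Fused Axpy layer: per-channel gate a of shape (n, c, 1, 1) scales the
    # residual branch x, then the shortcut y is added.
    return a * x + y

def scale_then_eltwise(a, x, y):
    # Same computation with standard layers: channel-wise Scale, then Eltwise SUM.
    return (x * a) + y

a = torch.rand(2, 64, 1, 1)
x = torch.randn(2, 64, 56, 56)
y = torch.randn(2, 64, 56, 56)
assert torch.equal(axpy(a, x, y), scale_then_eltwise(a, x, y))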

Trained Models

Table 1. Single-crop validation error on ImageNet-1k (center 224x224 crop from a resized image with shorter side = 256). SENet-154 is one of the superior models we used in the ILSVRC 2017 Image Classification Challenge, where we won 1st place (team name: WMW).

Model                      Top-1 err. (%)  Top-5 err. (%)  Size    Caffe Model
SE-BN-Inception            23.62           7.04            46 MB   GoogleDrive / BaiduYun
SE-ResNet-50               22.37           6.36            107 MB  GoogleDrive / BaiduYun
SE-ResNet-101              21.75           5.72            189 MB  GoogleDrive / BaiduYun
SE-ResNet-152              21.34           5.54            256 MB  GoogleDrive / BaiduYun
SE-ResNeXt-50 (32 x 4d)    20.97           5.54            105 MB  GoogleDrive / BaiduYun
SE-ResNeXt-101 (32 x 4d)   19.81           4.96            187 MB  GoogleDrive / BaiduYun
SENet-154                  18.68           4.47            440 MB  GoogleDrive / BaiduYun

Here we obtain better performance than that reported in the paper. We re-trained the SENets described in the paper on a single GPU server with 8 NVIDIA Titan X cards, using a mini-batch size of 256 and an initial learning rate of 0.1, for more epochs. In contrast, the results reported in the paper were obtained by training the networks with a larger batch size (1024) and learning rate (0.6) across 4 servers.
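
For reference, the single-crop protocol of Table 1 corresponds to the following evaluation-time preprocessing; again a torchvision sketch for illustration (the released Caffe models apply their own mean subtraction, so consult the deploy prototxt rather than this snippet for exact preprocessing):

from torchvision import transforms

# Single-crop evaluation as in Table 1: resize so the shorter side is 256
# (preserving aspect ratio), then take the center 224x224 crop.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Normalization omitted: the Caffe models expect their own mean values.
])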

Third-party re-implementations

  1. Caffe. SE-modules are integrated with a modified ResNet-50 that uses stride 2 in the 3x3 convolution instead of in the first 1x1 convolution, which obtains better performance: Repository.
  2. TensorFlow. SE-modules are integrated with a pre-activation ResNet-50 which follows the setup in fb.resnet.torch: Repository.
  3. TensorFlow. A simple TensorFlow implementation of SENets on CIFAR-10: Repository.
  4. MatConvNet. All the released SENets are imported into MatConvNet: Repository.
  5. MXNet. SE-modules are integrated with ResNeXt; more architectures are coming soon: Repository.
  6. PyTorch. Implementation of SENets in PyTorch: Repository.
  7. Chainer. Implementation of SENets in Chainer: Repository.

Citation

If you use Squeeze-and-Excitation Networks in your research, please cite the paper:

@inproceedings{hu2018senet,
  title={Squeeze-and-Excitation Networks},
  author={Jie Hu and Li Shen and Gang Sun},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}