Implementation of GoogLeNet-v2 [1] in Chainer
```
git clone https://github.com/nutszebra/googlenet_v2.git
cd googlenet_v2
git submodule init
git submodule update
python main.py -p ./ -g 0
```
Data augmentation
Train: Pictures are randomly resized so that their size falls in the range [256, 512], then 224x224 patches are randomly extracted and normalized locally. Horizontal flipping is applied with probability 0.5.
Test: Pictures are resized to 384x384 and normalized locally. A single crop per image is used to compute total accuracy.
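The train-time pipeline above can be sketched as follows. This is a minimal NumPy sketch, not the repository's actual code: it assumes the shorter side is resized to the randomly drawn length, uses nearest-neighbor resizing for brevity, and interprets "normalized locally" as per-image standardization.

```python
import numpy as np

def random_resize_crop(img, resize_range=(256, 512), crop=224, rng=np.random):
    """Random resize to [256, 512], random 224x224 crop, random horizontal
    flip, then per-image standardization (an interpretation of 'normalized
    locally'). `img` is a (channels, height, width) float array."""
    c, h, w = img.shape
    size = rng.randint(resize_range[0], resize_range[1] + 1)
    scale = size / min(h, w)                       # shorter side -> `size`
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resize via index mapping (for brevity only).
    ys = (np.arange(new_h) * h / new_h).astype(int)
    xs = (np.arange(new_w) * w / new_w).astype(int)
    resized = img[:, ys][:, :, xs]
    # Random 224x224 patch.
    top = rng.randint(0, new_h - crop + 1)
    left = rng.randint(0, new_w - crop + 1)
    patch = resized[:, top:top + crop, left:left + crop]
    if rng.rand() < 0.5:                           # horizontal flip, p = 0.5
        patch = patch[:, :, ::-1]
    # Per-image standardization: zero mean, unit variance.
    return (patch - patch.mean()) / (patch.std() + 1e-8)
```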
Auxiliary classifiers
Not implemented.
Learning rate
The learning-rate schedule is not described in [1], so following [2], the learning rate is multiplied by 0.96 every 8 epochs. The initial learning rate is 0.045 according to [1].
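Under this schedule, the learning rate is a step-exponential function of the epoch. A sketch (`learning_rate` is a hypothetical helper, not from the repository):

```python
def learning_rate(epoch, base_lr=0.045, gamma=0.96, step=8):
    """Multiply the base learning rate by 0.96 once every 8 epochs:
    lr(epoch) = 0.045 * 0.96 ** (epoch // 8)."""
    return base_lr * gamma ** (epoch // step)
```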
Weight decay
Weight decay is not described in [1], so it is set to 4.0*10^-5 as in [3].
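With plain SGD, this weight decay amounts to adding `wd * w` to each parameter's gradient before the update step, which is how Chainer's `WeightDecay` optimizer hook applies it. A framework-free sketch (hypothetical helper, not repository code):

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.045, weight_decay=4e-5):
    """One SGD update with L2 weight decay folded into the gradient:
    g <- g + wd * w, then w <- w - lr * g."""
    return w - lr * (grad + weight_decay * w)
```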
Separable conv
Normal convolutions are used instead of separable convolutions.
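For context, a depthwise-separable convolution replaces one k x k convolution with a depthwise k x k convolution plus a 1x1 pointwise convolution, shrinking the parameter count to roughly 1/k^2 + 1/c_out of the original. A quick count (illustrative helpers, not repository code):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a normal k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution plus 1x1 pointwise convolution
    (bias ignored)."""
    return c_in * k * k + c_in * c_out
```

For example, a 3x3 convolution from 64 to 128 channels needs 73,728 weights, while its separable counterpart needs only 8,768.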
| network | depth | total accuracy (%) |
|---|---|---|
| my implementation | 32 | 94.89 |
[1] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
[2] Going Deeper with Convolutions
[3] Rethinking the Inception Architecture for Computer Vision