hujie-frank/SENet

top-1 accuracy is very low

408550969 opened this issue · 5 comments

Hi
I tested SE-BN-Inception with the pre-trained caffemodel, but the accuracy is only 0.00018 on ImageNet 2012.
May I ask why this could happen? Is it a problem with my dataset?
Thanks.
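An accuracy of 0.00018 is below chance level for 1000 classes (0.001), which usually points to a preprocessing or label mismatch rather than the model. As a minimal sketch (function names are mine, not from the repo), Caffe-style preprocessing subtracts per-channel BGR means before a centre crop, and top-1 accuracy is the fraction of arg-max hits:

```python
import numpy as np

# Commonly used Caffe BGR channel means for ImageNet
MEAN_BGR = np.array([104.0, 117.0, 123.0])

def preprocess(img_rgb, crop=224):
    """Hypothetical Caffe-style preprocessing: RGB -> BGR,
    subtract per-channel means, then centre-crop to crop x crop."""
    img = img_rgb[:, :, ::-1].astype(np.float64) - MEAN_BGR
    h, w = img.shape[:2]
    y0, x0 = (h - crop) // 2, (w - crop) // 2
    return img[y0:y0 + crop, x0:x0 + crop]

def top1_accuracy(logits, labels):
    """Fraction of samples whose arg-max prediction equals the label."""
    return float(np.mean(np.argmax(logits, axis=1) == labels))
```

If the channel order or the label mapping of the LMDB does not match what the model expects, accuracy collapses exactly like this.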

Please refer to #5.

Thanks!

Hi, I am training SE-BN-Inception, but I find the accuracy is only 68% now (the current learning rate is 0.0001).
My learning rate was set wrong: I set it to 0.1 from the beginning.
And I only use mirror: true; the other settings are the same.
I set use_global_stats to false in train.prototxt and to true in test.prototxt.
Do you know why the accuracy is so low? How do I implement the augmentation list in the table? Thank you!
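For reference, the standard Caffe pattern (the layer and blob names below are illustrative, not copied from the repo) is to flip only use_global_stats between the two files:

```
# train.prototxt: normalize with the statistics of the current mini-batch
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: false }
}

# test.prototxt: normalize with the accumulated running mean/variance
layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: true }
}
```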

@408550969 The data augmentation doesn't bring a significant improvement; something must be wrong in your experiments. Additionally, you can refer here to find the implementation of the augmentations in that list.
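As a rough sketch of that kind of augmentation (the exact SENet pipeline may differ; the function name and parameter ranges are my own, following the common Inception-style recipe), a random-area/aspect-ratio crop with horizontal mirroring can be written in plain NumPy:

```python
import numpy as np

def random_scale_aspect_crop(img, out_size=224, area_range=(0.08, 1.0),
                             ratio_range=(3 / 4, 4 / 3), rng=None):
    """Inception-style augmentation sketch: sample a crop with random
    area and aspect ratio, resize it to out_size x out_size, and mirror
    it with probability 0.5.  The resize is a simple nearest-neighbour
    index map to stay dependency-free; a real pipeline would use
    bilinear resampling."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    for _ in range(10):  # retry until the sampled box fits the image
        area = rng.uniform(*area_range) * h * w
        ratio = rng.uniform(*ratio_range)
        cw = int(round(np.sqrt(area * ratio)))
        ch = int(round(np.sqrt(area / ratio)))
        if 0 < cw <= w and 0 < ch <= h:
            x0 = rng.integers(0, w - cw + 1)
            y0 = rng.integers(0, h - ch + 1)
            crop = img[y0:y0 + ch, x0:x0 + cw]
            ys = np.arange(out_size) * ch // out_size
            xs = np.arange(out_size) * cw // out_size
            out = crop[ys][:, xs]
            if rng.random() < 0.5:  # horizontal mirror
                out = out[:, ::-1]
            return out
    # fallback: centre crop of the short side
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = img[y0:y0 + s, x0:x0 + s]
    idx = np.arange(out_size) * s // out_size
    return crop[idx][:, idx]
```

In Caffe this sampling would sit in a custom data layer, since the built-in transform_param only supports a fixed crop_size and mirroring.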

This is the top and bottom of my train.prototxt. Can you help me see what's wrong? Thanks!
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 224
    mean_value: 104.0
    mean_value: 117.0
    mean_value: 123.0
  }
  data_param {
    source: "/media/cll/Seagate/ilsvrc12_train_lmdb"
    batch_size: 32
    backend: LMDB
  }
}

layer {
  name: "classifier"
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"
  top: "classifier"
  inner_product_param {
    num_output: 1000
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "classifier"
  bottom: "label"
  top: "loss"
}