train.prototxt
foralliance opened this issue · 1 comments
foralliance commented
@taokong Hi,
I have a question about train.prototxt:
layer {
  name: "rpn_lrn7"
  type: "BatchNorm"
  bottom: "rpn_conv7"
  top: "rpn_lrn7"
}
For the BatchNorm layer, many configurations I see online / in other papers look like this:
layer {
  bottom: "res5c_branch2b"
  top: "res5c_branch2b"
  name: "bn5c_branch2b"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: true
  }
  param {
    lr_mult: 0.0
    decay_mult: 0.0
  }
  param {
    lr_mult: 0.0
    decay_mult: 0.0
  }
  param {
    lr_mult: 0.0
    decay_mult: 0.0
  }
}
1) Why do you omit these three default param blocks here?
2) Isn't use_global_stats supposed to be false during training? Why do so many configs set it to true?
(I wrote this in Chinese to be precise; sorry for the trouble!)
taokong commented
This layer is trained from scratch, so we have no fixed params. In many other papers/codebases, the BatchNorm params come from models pretrained on ImageNet.
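To expand on both questions, here is a sketch of the two standard ways BatchNorm is configured in Caffe. The layer/blob names below are illustrative, not from this repo. If I recall the BVLC Caffe source correctly, the layer adds `lr_mult: 0` param specs automatically when they are omitted (its internal blobs are updated in the forward pass, not by the solver), so the three explicit blocks are redundant rather than required; and `use_global_stats` defaults to false in the TRAIN phase and true in the TEST phase, which is why fine-tuning setups that freeze pretrained statistics set it to true explicitly.

```
# (1) Training from scratch, as in this repo's train.prototxt:
# use_global_stats defaults to false during training, so the layer computes
# per-batch mean/variance and maintains its running averages itself.
layer {
  name: "bn_scratch"          # illustrative name
  type: "BatchNorm"
  bottom: "conv_scratch"
  top: "bn_scratch"
}

# (2) Fine-tuning with frozen ImageNet statistics:
# use_global_stats: true makes the layer use the stored running mean/variance,
# and the lr_mult: 0 / decay_mult: 0 blocks keep the solver from touching the
# layer's internal blobs (mean, variance, moving-average factor).
layer {
  name: "bn_frozen"           # illustrative name
  type: "BatchNorm"
  bottom: "conv_frozen"
  top: "conv_frozen"
  batch_norm_param {
    use_global_stats: true
  }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
  param { lr_mult: 0 decay_mult: 0 }
}
```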