Is the RPN class output fed straight into the cross-entropy loss without a softmax?
dishofchicken opened this issue · 1 comment
dishofchicken commented
Hi, I'm a beginner. Reading your code has helped me a lot, but there's something I don't understand about the RPN network below.
```python
from keras.layers import Conv2D, Lambda, Reshape


def rpn(base_layers, num_anchors):
    x = Conv2D(512, (3, 3), padding='same', activation='relu',
               kernel_initializer='normal', name='rpn_conv')(base_layers)
    # Why is the activation here linear?
    x_class = Conv2D(num_anchors * 2, (1, 1), kernel_initializer='uniform',
                     activation='linear', name='rpn_class_logits')(x)
    x_class = Reshape((-1, 2))(x_class)
    # Is the initializer here swapped with the one in the class branch above?
    x_regr = Conv2D(num_anchors * 4, (1, 1),
                    kernel_initializer='normal', name='rpn_deltas')(x)
    x_regr = Reshape((-1, 4))(x_regr)
    return x_regr, x_class


# The RPN loss layer is defined here, and class_logits is passed straight into
# the cross-entropy computation. I don't see any other processing applied.
# Is that OK?
cls_loss_rpn = Lambda(lambda x: rpn_cls_loss(*x), name='rpn_class_loss')(
    [class_logits, rpn_cls_ids, anchor_indices])
```
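For reference, here is a minimal sketch of what I'd expect a logits-based `rpn_cls_loss` to do in a setup like this. The argument shapes and the gather step are assumptions on my part for illustration, not the repo's actual code:

```python
import tensorflow as tf


def rpn_cls_loss(class_logits, cls_ids, anchor_indices):
    """Hypothetical sketch of an RPN classification loss on raw logits.

    Assumed layouts (not taken from the repo):
      class_logits:   (batch, num_anchors, 2) raw scores, no softmax applied
      cls_ids:        (batch, num_sampled, 1) class ids (0 = background, 1 = foreground)
      anchor_indices: (batch, num_sampled, 1) indices of the sampled anchors
    """
    # Keep only the logits of the sampled anchors.
    indices = tf.cast(anchor_indices[..., 0], tf.int32)
    logits = tf.gather(class_logits, indices, batch_dims=1)
    labels = tf.one_hot(tf.cast(cls_ids[..., 0], tf.int32), depth=2)
    # softmax_cross_entropy_with_logits applies the softmax internally,
    # which is why the network's class head can use a linear activation.
    losses = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    return tf.reduce_mean(losses)
```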
Thanks :)
dishofchicken commented
Sorry, I went back and looked at the difference between tf.nn.softmax_cross_entropy_with_logits and categorical_crossentropy, and now I understand :)
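For anyone else who lands here: tf.nn.softmax_cross_entropy_with_logits expects raw logits (hence the linear activation on the class head) and applies the softmax internally, while Keras's categorical_crossentropy by default expects probabilities that have already been through a softmax. A quick sanity check that the two paths agree, assuming TensorFlow 2.x in eager mode:

```python
import tensorflow as tf

logits = tf.constant([[2.0, -1.0], [0.5, 0.3]])
labels = tf.constant([[1.0, 0.0], [0.0, 1.0]])

# Path 1: raw logits, softmax applied inside the loss.
loss_from_logits = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# Path 2: explicit softmax, then cross-entropy on probabilities.
probs = tf.nn.softmax(logits)
loss_from_probs = tf.keras.losses.categorical_crossentropy(labels, probs)

# The two agree up to floating-point error, so a linear class head plus a
# logits-based loss is equivalent to softmax followed by cross-entropy.
print(loss_from_logits.numpy(), loss_from_probs.numpy())
```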