fc_out, relu, top_k_op
Opened this issue · 2 comments
WuJunhui commented
https://github.com/USTC-Video-Understanding/I3D_Finetune/blob/master/Demo_Transfer_rgb.py#L139
Hi,
I think the last fc layer should not use an activation unit.
If the output of the fc layer is all negative, it becomes all zeros after the ReLU unit.
In this case, top_k_op = tf.nn.in_top_k(fc_out, label_holder, 1) will always return True, because every class ties at zero and in_top_k counts ties at the top-k boundary as correct.
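Here is a minimal sketch of the problem (TF 1.x style, matching the repo; the constants are made up for illustration):

```python
import tensorflow as tf

# If every pre-ReLU logit is negative, ReLU maps the whole row to zeros,
# so all classes tie at 0.0. in_top_k treats ties at the top-k boundary
# as "in the top k", so it returns True for any label, inflating accuracy.
fc_out = tf.constant([[0.0, 0.0, 0.0, 0.0]])   # ReLU output when all logits < 0
label_holder = tf.constant([2])                # arbitrary ground-truth label

top_k_op = tf.nn.in_top_k(predictions=fc_out, targets=label_holder, k=1)

with tf.Session() as sess:
    print(sess.run(top_k_op))   # [ True] regardless of the label
```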
Rhythmblue commented
I think you are right.
When I train with this code, the results are sometimes all the same.
You can just remove the activation parameter.
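For example, assuming the layer is built with tf.layers.dense (the exact call in Demo_Transfer_rgb.py may differ), the last layer would look like this:

```python
import tensorflow as tf

# Hypothetical sketch: the final classification layer should emit raw logits,
# so no activation is applied here; softmax is handled by the loss op instead.
def build_classifier(features, num_classes):
    fc_out = tf.layers.dense(features, num_classes, activation=None)
    return fc_out
```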