USTC-Video-Understanding/I3D_Finetune

fc_out, relu, top_k_op

Opened this issue · 2 comments

https://github.com/USTC-Video-Understanding/I3D_Finetune/blob/master/Demo_Transfer_rgb.py#L139
Hi,
I think the last fc layer should not use an activation unit.
If the output of the fc layer is all negative, it becomes all zeros after the ReLU unit.
In that case, top_k_op = tf.nn.in_top_k(fc_out, label_holder, 1) will always return True, because every class ties at zero and ties are all counted as being in the top k.
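
A minimal sketch (assuming TensorFlow 1.x, as used in this repo) that reproduces the problem: when the logits are all zero, every class ties for the top-1 position and in_top_k reports every example as correct.

```python
import numpy as np
import tensorflow as tf

# Pretend fc_out was all negative, so ReLU turned it into all zeros.
logits = tf.constant(np.zeros((2, 5), dtype=np.float32))
labels = tf.constant([3, 1], dtype=tf.int32)  # arbitrary target classes
top_k_op = tf.nn.in_top_k(logits, labels, 1)

with tf.Session() as sess:
    # Prints [ True  True ]: every class ties at 0, so all are "in the top 1".
    print(sess.run(top_k_op))
```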

I think you are right.
When I use this code to train, the results are sometimes all the same.
You can just remove the activation parameter.
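
A hypothetical sketch of the fix, assuming the final classifier is built with tf.contrib.slim's fully_connected (the exact call in Demo_Transfer_rgb.py may differ): pass activation_fn=None so the last layer outputs raw logits instead of ReLU-clipped values.

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

def build_classifier(features, num_classes):
    # Buggy version: slim.fully_connected defaults to ReLU, so all-negative
    # logits collapse to zero and in_top_k always succeeds.
    # fc_out = slim.fully_connected(features, num_classes)

    # Fixed version: no activation on the final layer, keep raw logits.
    fc_out = slim.fully_connected(features, num_classes, activation_fn=None)
    return fc_out
```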

vra commented

Hi @WuJunhui ,
Thanks for your helpful advice, we have fixed this issue in the latest version of this repo. Please run git pull to download the newest code.