redundant activation functions in lipread_mouth.py
jurastm opened this issue · 1 comments
jurastm commented
Hi, thank you for such a nice repo.
I noticed that in your code you use the slim library and a custom PReLU activation after every conv layer.
The problem is that after `slim.conv2d` (which actually performs 3D convolutions in this case) the tensor has already passed through an activation, because the default value of the `activation_fn` parameter is ReLU.
So your PReLU alphas never learn: instead of negative values they only ever see zeros.
To fix that, disable the built-in activation so PReLU is the only one applied: `net = slim.conv2d(inputs, ...., activation_fn=None)`
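To illustrate why the alphas can't learn, here is a minimal NumPy sketch (not the repo's actual code) showing that composing PReLU after ReLU is equivalent to ReLU alone, so the alpha parameter has no effect on the output and receives zero gradient:

```python
import numpy as np

def relu(x):
    # Default activation applied by slim.conv2d
    return np.maximum(x, 0.0)

def prelu(x, alpha):
    # PReLU: identity for positive inputs, alpha * x for negative inputs
    return np.where(x > 0, x, alpha * x)

x = np.linspace(-3.0, 3.0, 13)
alpha = 0.25

# After ReLU the tensor is non-negative, so PReLU's alpha branch is dead:
assert np.allclose(prelu(relu(x), alpha), relu(x))

# With activation_fn=None the conv output keeps negative values,
# so alpha actually changes the result:
assert not np.allclose(prelu(x, alpha), relu(x))
```

Since `prelu(relu(x), alpha)` is identical to `relu(x)` for every input, the loss is constant with respect to `alpha`, which matches the observation that the alphas stay at their initial values.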