astorfi/lip-reading-deeplearning

redundant activation functions in lipread_mouth.py

jurastm opened this issue · 1 comment

Hi, thank you for such a nice repo.

I noticed that in your code you use the slim library and a custom PReLU activation after every conv layer.
The problem is that after slim.conv2d (which actually performs 3D convolutions in this case), the tensor has already passed through an activation, because the default value of the 'activation_fn' parameter is ReLU.
So your PReLU alphas never learn: by the time the tensor reaches PReLU, every negative value has already been replaced by zero, and alpha only receives gradient from negative inputs.
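
To make the mechanism concrete, here is a minimal sketch of a TF1-style PReLU (the helper in this repo may differ; the function and variable names here are illustrative):

```python
import tensorflow as tf

def prelu(x, scope=None):
    """PReLU: max(0, x) + alpha * min(0, x) with a learnable per-channel alpha."""
    with tf.variable_scope(scope, default_name='prelu'):
        alpha = tf.get_variable(
            'alpha',
            shape=x.get_shape()[-1:],
            initializer=tf.constant_initializer(0.25),
            dtype=x.dtype)
        # alpha only multiplies the negative part of x, so if x is already
        # non-negative (e.g. the output of a ReLU), the alpha term is zero
        # everywhere and alpha never receives a gradient.
        return tf.nn.relu(x) + alpha * tf.minimum(0.0, x)
```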
To fix it, disable the built-in activation: net = slim.conv2d(inputs, ..., activation_fn=None)
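
A sketch of the before/after pattern (the channel count, kernel size, scope names, and `inputs` are illustrative assumptions, not the repo's exact code):

```python
import tensorflow.contrib.slim as slim

# Before (buggy): slim.conv2d applies its default ReLU internally, so the
# PReLU that follows only ever sees non-negative values.
net = slim.conv2d(inputs, 64, [3, 3], scope='conv1')  # activation_fn defaults to tf.nn.relu
net = prelu(net, scope='prelu1')                      # alpha gets zero gradient

# After (fixed): disable the built-in activation so PReLU receives the raw
# pre-activation values, including the negative ones.
net = slim.conv2d(inputs, 64, [3, 3], activation_fn=None, scope='conv1')
net = prelu(net, scope='prelu1')                      # alpha now learns
```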

@jurastm Thank you so much for pointing this out. Yes, I believe this is an issue in the latest version available on GitHub; it was handled correctly in previous versions.

Could you please propose these changes via a pull request?
I would greatly appreciate it.