rasmusbergpalm/DeepLearnToolbox

ReLU for CNN?

mrgloom opened this issue · 1 comment

Is it possible to replace the sigm activation function with relu in the CNN? I tried replacing sigm with relu in cnnff.m (roughly the swap sketched below), but it doesn't work.

function X = relu(P)
    % Rectified linear unit: element-wise max(0, P)
    X = max(0, P);
end
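
For context, the attempted change in cnnff.m was roughly the following; the exact line may differ between versions of the toolbox, so treat this as a sketch rather than the literal code:

% In cnnff.m, the sigmoid activation at the convolutional layers,
% roughly
%   net.layers{l}.a{j} = sigm(z + net.layers{l}.b{j});
% was swapped for
%   net.layers{l}.a{j} = relu(z + net.layers{l}.b{j});
% relu itself is element-wise, so a quick sanity check (with relu.m
% above on the path) is:
P = [-1.5 0.0 2.3; 0.7 -0.2 1.1];
X = relu(P)    % expected: [0 0 2.3; 0.7 0 1.1]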


I guess this also requires changes to the backprop derivatives?

There is pull request #131, which adds ReLU to the regular feed-forward network (nnff() and nnbp()). Maybe you can borrow some ideas from there. You will certainly need to add ReLU support to the backprop pass as well; see the sketch below.
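
A minimal sketch of the derivative swap, assuming the usual structure of cnnbp.m, where the sigmoid-derivative factors have the form a .* (1 - a); the exact lines may differ by version, and relu_grad is a hypothetical helper, not part of the toolbox:

% Hypothetical helper: ReLU derivative written in terms of the
% activation a = relu(p): 1 where the unit fired, 0 elsewhere.
relu_grad = @(a) double(a > 0);

% In cnnbp.m, each sigmoid-derivative factor of the form
%   a .* (1 - a)
% would be replaced by relu_grad(a); e.g. the output-layer delta,
% roughly
%   net.od = net.e .* (net.o .* (1 - net.o));
% would become
%   net.od = net.e .* relu_grad(net.o);

% Quick numerical check of the derivative itself:
relu  = @(p) max(0, p);
p     = randn(3);
h     = 1e-6;
g_num = (relu(p + h) - relu(p - h)) / (2 * h);   % central differences
g_ana = relu_grad(relu(p));
disp(max(abs(g_num(:) - g_ana(:))))              % ~0 away from p == 0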

My experience with the DeepLearnToolbox CNN code is that it is unbearably slow and rather limited; for example, it doesn't support fully-connected layers at all. You may have better luck with MatConvNet, which seems to be quite full-featured, though admittedly more complex.