In mlp.py, why are the parameters of the two models added together?
Valentin4869 opened this issue · 1 comments
Valentin4869 commented
In mlp.py at line 194, the parameters of the hidden layer and the output layer are added together. Is this a typo? If not, then why are we adding the parameters?
nouiz commented
This is not what you think it is. What we add are two lists of parameters. We
don't add the parameters together; we just create one list that holds all
the parameters. We use that list of all the parameters later for
the gradient computation.
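A minimal sketch of what nouiz describes (the variable names here are placeholders, not the actual shared variables from mlp.py): Python's `+` operator on lists concatenates them rather than adding their elements, so the result is a single flat list of every parameter.

```python
# Stand-ins for the hidden layer's and output layer's parameter lists.
# In the tutorial these would be lists of Theano shared variables
# (weight matrices and bias vectors).
hidden_params = ["W_hidden", "b_hidden"]
output_params = ["W_out", "b_out"]

# "+" on lists is concatenation, not element-wise addition:
# the result is one list containing all four parameters.
params = hidden_params + output_params
print(params)  # ['W_hidden', 'b_hidden', 'W_out', 'b_out']
```

That combined list is what later gets passed to the gradient computation, so the gradient is taken with respect to every parameter of the model in one call.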