Changed the activation function, but it returns the same accuracy value
Opened this issue · 4 comments
bakigkgz1 commented
leo-c-ling99 commented
Hi,
Disclaimer: it's been a few years since I've looked at this code. It seems that the gradient of the activation is hardcoded within Train.cpp. You might need to update their backpropagation code if you want to change the activation.
Train.cpp line 507 in cc13863: https://github.com/neurosim/MLP_NeuroSim_V3.0/blob/cc1386372d01fc022a9bf52cabd8c96e94fb838b/Train.cpp#L507
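(For context, this is just the standard backprop identity rather than code from the repo: with a squared-error loss, which the -2*(Output[i][j] - a2[j]) factor in Train.cpp suggests, the output-layer delta is s2 = dE/dz = -2 * f'(z) * (target - a2), and the code writes the derivative f'(z) in terms of the activation value. For the logistic sigmoid that factor is a2*(1 - a2), which appears to be what is hardcoded at the linked line; for tanh it becomes (1 - a2*a2). The same substitution applies to the hidden-layer delta s1.)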
bakigkgz1 commented
Thank you
bakigkgz1 commented
Hello, first of all, thank you for your reply. I changed the part you mentioned as follows, but after a certain number of iterations the accuracy gets stuck at 10.35%. (I want to use tanh as the activation function.)
// Backpropagation
/* Second layer (hidden layer to the output layer) */
for (int j = 0; j < param->nOutput; j++) {
    s2[j] = -2 * a2[j] * (1 - a2[j]*a2[j]) * (Output[i][j] - a2[j]);
}
/* First layer (input layer to the hidden layer) */
std::fill_n(s1, param->nHide, 0);
#pragma omp parallel for
for (int j = 0; j < param->nHide; j++) {
    for (int k = 0; k < param->nOutput; k++) {
        s1[j] += a1[j] * (1 - a1[j]*a1[j]) * weight2[k][j] * s2[k];
    }
}
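A possible cause, hedged since I have not run this: if the forward pass really computes a2 = tanh(z), the tanh derivative is (1 - a2*a2), so the leading a2[j] and a1[j] factors above look like leftovers from the sigmoid derivative a*(1 - a). Changing only these two loops is also not enough on its own; the forward activation used to compute a1 and a2 (and the one used in the test pass) has to be switched to tanh as well, otherwise the gradient no longer matches the network being evaluated. 10.35% is essentially chance level for the 10 MNIST classes, which usually points to such a mismatch. Below is a small self-contained sketch, not the NeuroSim code, contrasting the two delta computations for a single output unit:

// Standalone illustration of the derivative swap (assumes a squared-error loss).
// For sigmoid, f'(z) expressed via the activation a is a*(1-a);
// for tanh it is 1 - a*a, with no extra leading factor of a.
#include <cmath>
#include <cstdio>

int main() {
    double z = 0.7;        // example pre-activation value
    double target = 1.0;   // example desired output

    // Sigmoid activation and its output delta (what the original code assumes).
    double aSig = 1.0 / (1.0 + std::exp(-z));
    double sSig = -2.0 * aSig * (1.0 - aSig) * (target - aSig);

    // Tanh activation and its output delta: derivative is (1 - a*a).
    double aTanh = std::tanh(z);
    double sTanh = -2.0 * (1.0 - aTanh * aTanh) * (target - aTanh);

    std::printf("sigmoid: a = %.4f  delta = %.4f\n", aSig, sSig);
    std::printf("tanh:    a = %.4f  delta = %.4f\n", aTanh, sTanh);
    return 0;
}

If tanh is used on the output layer, it may also be worth checking how the 0/1 MNIST targets interact with its (-1, 1) output range, but that is secondary to the derivative and forward-pass mismatch.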
bakigkgz1 commented
Hello again, I apologize for disturbing you. In this application we want to use different activation functions, and for this purpose we are looking to hire someone to do the job for a fee. Could you or someone else help with this?
Thank you and best regards.