Using a different activation function in NeuroSim
bakigkgz1 commented
/* Truncation with a custom threshold */
double truncate(double x, int numLevel, double threshold) {
    if (numLevel <= 0) {    // No truncation if numLevel <= 0
        return x;
    } else {
        int sign = 1;
        if (x < 0)
            sign = -1;      // For truncation on negative numbers
        double val = x * numLevel * sign;
        int r_val = (int)(val);
        if (val - r_val >= threshold)
            val = r_val + 1;
        else
            val = r_val;
        return val * sign / numLevel;
    }
}
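For reference, here is a minimal standalone check of how this truncation quantizes a value onto numLevel discrete levels (a sketch only; the sample arguments are illustrative, and it assumes formula.cpp with the truncate definition above is compiled alongside):

#include <cstdio>

double truncate(double x, int numLevel, double threshold);  // defined above in formula.cpp

int main() {
    // 0.37 * 64 = 23.68; the fractional part 0.68 >= threshold 0.5, so it rounds up to 24/64 = 0.375
    printf("%f\n", truncate(0.37, 64, 0.5));   // expected: 0.375
    // The sign is handled separately, so -0.37 maps symmetrically to -0.375
    printf("%f\n", truncate(-0.37, 64, 0.5));  // expected: -0.375
    return 0;
}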
formula.cpp
/* Activation function */
double tanh_fx(double x) {
    return (exp(x) - exp(-x)) / (exp(x) + exp(-x));
}
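Note that this formulation can overflow for large |x|, since exp(x) grows very quickly. One alternative (a sketch, not part of the original NeuroSim code) is to delegate to std::tanh from <cmath>, which is mathematically equivalent and numerically stable:

#include <cmath>

/* Numerically stable tanh activation: std::tanh avoids the exp() overflow
   that the hand-written ratio above can hit for large |x| */
double tanh_fx_stable(double x) {
    return std::tanh(x);
}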
train.cpp
// Backpropagation
/* Second layer (hidden layer to the output layer) */
for (int j = 0; j < param->nOutput; j++) {
    s2[j] = (1 - a2[j] * a2[j]) * (Output[i][j] - a2[j]);
}

/* First layer (input layer to the hidden layer) */
std::fill_n(s1, param->nHide, 0);
#pragma omp parallel for
for (int j = 0; j < param->nHide; j++) {
    for (int k = 0; k < param->nOutput; k++) {
        s1[j] += (1 - a1[j] * a1[j]) * weight2[k][j] * s2[k];
    }
}
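The factor (1 - a*a) used above is the derivative of tanh written in terms of its own output, d/dx tanh(x) = 1 - tanh(x)^2. A small finite-difference check (a standalone sketch, independent of the NeuroSim sources) can confirm that this analytic derivative matches the activation function:

#include <cmath>
#include <cstdio>

// tanh and its derivative expressed via the activation value a = tanh(x)
static double f(double x)    { return std::tanh(x); }
static double dfda(double a) { return 1.0 - a * a; }

int main() {
    const double h = 1e-6;
    for (double x = -2.0; x <= 2.0; x += 0.5) {
        double a = f(x);
        double numeric  = (f(x + h) - f(x - h)) / (2.0 * h);  // central difference
        double analytic = dfda(a);                            // 1 - a^2
        printf("x=%5.2f  numeric=%.6f  analytic=%.6f\n", x, numeric, analytic);
    }
    return 0;
}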
Hello, I made the changes above to use the tanh activation function instead of the sigmoid activation function in the application, but the accuracy is very low, around ten percent. What could be the reason for this? I would appreciate your help. Thank you, and have a good day.
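For what it's worth, one quick diagnostic (a sketch only; the variable names a1 and param->nHide follow the snippets above, and whether the rest of the pipeline still assumes sigmoid outputs in [0, 1] is an assumption to verify, not a confirmed cause) is to log the range of the activations during training. tanh produces values in [-1, 1], so any downstream quantization or target encoding that expects [0, 1] would show up immediately:

// Hypothetical logging placed after the feedforward pass in train.cpp
// (assumes <cstdio> or an equivalent is already included there).
// tanh outputs lie in [-1, 1]; if quantization or targets assume [0, 1],
// the minimum below will be negative and that mismatch is worth ruling out.
double a1Min = a1[0], a1Max = a1[0];
for (int j = 1; j < param->nHide; j++) {
    if (a1[j] < a1Min) a1Min = a1[j];
    if (a1[j] > a1Max) a1Max = a1[j];
}
printf("hidden activation range: [%f, %f]\n", a1Min, a1Max);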