Wrong Results using tpr, tnr, fpr, fnr
Closed this issue · 6 comments
Hi,
I am using the metrics (fpr, fnr, tpr, tnr) for my model in Keras (tensorflow.python.keras v2.1.2), as you can see:
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['binary_accuracy', tpr, tnr, fpr, fnr])
But when running this with history = model.fit_generator(..),
I get the accuracy for both tpr and fpr, and 1 - acc for both tnr and fnr. For debugging I also returned the outputs of the contingency_table,
and tp = tn = the number of all true predictions, while fp = fn = the number of all false predictions.
Do you have any suggestions on how to fix this issue and get the right values? Thanks in advance!
The problem with Keras metrics is that they are computed for each batch and averaged on the fly. Consider a metric computed as A/B. If the denominator B is the same for all the batches (which is the case for accuracy), then the average across batches is the same as the metric computed across the whole dataset. However, if B varies across batches, then the average of batch metrics is not the same as the global metric.
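A minimal, self-contained sketch of this effect (the batch counts below are made up for illustration): when the denominator B differs across batches, the running average of the per-batch A/B is not sum(A)/sum(B) over the whole dataset.

```python
# Illustration with made-up batch counts: metric = A / B per batch,
# where B (e.g. the number of actual positives) varies across batches.
batches = [
    (2, 10),   # batch 1: A=2, B=10 -> 0.2
    (9, 10),   # batch 2: A=9, B=10 -> 0.9
    (1, 2),    # batch 3: A=1, B=2  -> 0.5 (far fewer positives here)
]

# Keras-style running average of the per-batch metric:
batch_avg = sum(a / b for a, b in batches) / len(batches)

# The same metric computed once over the whole dataset:
global_metric = sum(a for a, b in batches) / sum(b for a, b in batches)

print(batch_avg)      # 0.533... (mean of 0.2, 0.9, 0.5)
print(global_metric)  # 0.545... (12 / 22)
```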
Can you compute your metrics using

from concise.eval_metrics import *

y_pred = model.predict(x)
tpr(y_true, y_pred)
...
fpr(y_true, y_pred)

and print those together with the metrics you obtain from fit_generator?
Thanks a lot for the fast reply.
"The problem with Keras metrics is that they are computed for each batch and averaged on the fly. Consider a metric computed as A/B. If the denominator B is the same for all the batches (which is the case for accuracy), then the average across batches is the same as the metric computed across the whole dataset. However, if B varies across batches, then the average of batch metrics is not the same as the global metric." - I totally agree. But my goal is to have metrics based on the contingency table.
I could not compute the metrics in model.compile() using the code from concise.eval_metrics.
For model.compile you should use the metrics from concise.metrics, and for evaluating the predictions you should use the metrics from concise.eval_metrics. The reason is that the former have to be implemented with Keras functions.
Thanks a lot. I used these functions in model.compile. But the issue is that tpr and tnr come out identical, as you can see below. For testing I used just a small data set, but the values should still differ.
Epoch 1/3
62/62 [==============================] - 108s 2s/step - loss: 0.7520 - binary_accuracy: 0.6280 - tpr: 0.6280 - tnr: 0.6280 - val_loss: 2.5995 - val_binary_accuracy: 0.4271 - val_tpr: 0.4271 - val_tnr: 0.4271
Epoch 2/3
62/62 [==============================] - 28s 459ms/step - loss: 0.6160 - binary_accuracy: 0.6845 - tpr: 0.6845 - tnr: 0.6845 - val_loss: 0.9102 - val_binary_accuracy: 0.4870 - val_tpr: 0.4870 - val_tnr: 0.4870
Epoch 3/3
62/62 [==============================] - 27s 439ms/step - loss: 0.5802 - binary_accuracy: 0.7016 - tpr: 0.7016 - tnr: 0.7016 - val_loss: 1.6536 - val_binary_accuracy: 0.4818 - val_tpr: 0.4818 - val_tnr: 0.4818
Solved the issue:
I had set my data set to a categorical class mode with two classes, so the values were one-hot encoded. Applying K.argmax() to y and y_pred solves the issue.
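A minimal numpy sketch (with toy data of my own, not from the thread) of why this happened: with one-hot labels for two classes, every correctly classified sample contributes one TP in its true-class column and one TN in the other column, so tp == tn and fp == fn, which collapses tpr and tnr to the accuracy. Taking argmax first recovers class indices, and tpr and tnr separate again.

```python
import numpy as np

# One-hot ground truth and predictions for 4 samples, 2 classes.
y_true = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
y_pred = np.array([[1, 0], [0, 1], [0, 1], [0, 1]])  # sample 2 is wrong

# Naive counts over the raw one-hot arrays: each correct sample adds
# one TP (its true-class column) AND one TN (the other column).
tp = np.sum((y_true == 1) & (y_pred == 1))   # 3
tn = np.sum((y_true == 0) & (y_pred == 0))   # 3 -> tp == tn
fp = np.sum((y_true == 0) & (y_pred == 1))   # 1
fn = np.sum((y_true == 1) & (y_pred == 0))   # 1 -> fp == fn

# After argmax, labels are class indices and the rates differ properly.
y_true_cls = y_true.argmax(axis=1)           # [0, 0, 1, 1]
y_pred_cls = y_pred.argmax(axis=1)           # [0, 1, 1, 1]
tpr = np.mean(y_pred_cls[y_true_cls == 1] == 1)   # 1.0
tnr = np.mean(y_pred_cls[y_true_cls == 0] == 0)   # 0.5
print(tp, tn, fp, fn, tpr, tnr)
```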
Good day, hope you are well. @tommysft, I would like to ask how you accessed tpr and tnr in the model.compile()
line of code. I also have a binary classification problem, but I am having a hard time getting tpr and tnr.
I used the code model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', 'tpr', 'tnr'])
Please indicate what I am doing wrong.