A simple Python script to compute the equal error rate (EER) for evaluating binary classification models.
Reference: https://stackoverflow.com/questions/28339746/equal-error-rate-in-python
Code:
import numpy as np
import sklearn.metrics


def compute_eer(label, pred, positive_label=1):
    """
    Compute the equal error rate (EER).
    Only tested on binary classification.

    :param label: ground-truth labels, a 1-d list or np.array; each element is the label of one sample
    :param pred: model predictions (scores), a 1-d list or np.array; each element is the prediction for one sample
    :param positive_label: the class treated as the positive class when computing EER
    :return: equal error rate (EER)
    """
    # fpr, tpr, and threshold are np.arrays of the same length
    fpr, tpr, threshold = sklearn.metrics.roc_curve(label, pred, pos_label=positive_label)
    fnr = 1 - tpr

    # index of the ROC point where fnr and fpr are closest to each other
    min_index = np.nanargmin(np.absolute(fnr - fpr))
    # the decision threshold at which fnr == fpr (approximately); kept for reference, not returned
    eer_threshold = threshold[min_index]

    # theoretically the EER from fpr and the EER from fnr should be identical,
    # but in practice they can differ slightly, so return their mean
    eer_1 = fpr[min_index]
    eer_2 = fnr[min_index]
    eer = (eer_1 + eer_2) / 2
    return eer
Sample usage:
from compute_eer import compute_eer
label = [1, 1, 0, 0, 1]
prediction = [0.3, 0.1, 0.4, 0.8, 0.9]
eer = compute_eer(label, prediction)
print('The equal error rate is {:.3f}'.format(eer))
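Alternative: the StackOverflow thread referenced above also discusses an interpolation-based estimate of the EER using scipy. Below is a minimal sketch of that idea, not part of the script itself; it assumes scipy is installed, and the function name compute_eer_interp is illustrative.

import sklearn.metrics
from scipy.optimize import brentq
from scipy.interpolate import interp1d

def compute_eer_interp(label, pred, positive_label=1):
    # build the ROC curve, then find the operating point where fpr == 1 - tpr (i.e. fpr == fnr)
    fpr, tpr, _ = sklearn.metrics.roc_curve(label, pred, pos_label=positive_label)
    # brentq finds x in [0, 1] such that 1 - x - tpr_at_fpr(x) == 0, where tpr_at_fpr
    # linearly interpolates the ROC curve; that x is the EER
    eer = brentq(lambda x: 1.0 - x - interp1d(fpr, tpr)(x), 0.0, 1.0)
    return eer

On well-behaved score distributions this should agree closely with compute_eer above; the discrete version returns the ROC point nearest the crossing, while the interpolated version estimates the crossing itself.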