Compute TDR at low FAR
Dear authors,
Thank you very much for providing the code. I found your research very interesting. I would like to compute and hopefully reproduce the results of Table 3 in your paper. Specifically, how can I compute the "true detection rate (TDR) at a low false alarm rate (FAR) of 0.5%"? If you could provide any code, it would be really appreciated.
I really appreciate any help you can provide.
Hi,
Thank you for your interest in our work. I'm really sorry for not replying to you earlier. I'm currently trying to find this particular evaluation code and will get back to you with it. If I'm not able to find it, I'll write new code and send that to you instead. In the meantime, you can find some background at https://anssi-fr.github.io/SecuML/miscellaneous.detection_perf.html. I'll try to get back to you asap. Thank you!
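In the meantime, here is a minimal sketch of the idea (my own illustration, not the exact evaluation code from the paper): TDR at a fixed FAR is simply the true positive rate of the ROC curve read off at the target false positive rate. `tdr_at_far` is a hypothetical helper name, and `y_true`/`y_score` are assumed inputs:

```python
import numpy as np
from sklearn.metrics import roc_curve


def tdr_at_far(y_true, y_score, far=0.005):
    """TDR at a fixed FAR: read the TPR off the ROC curve at fpr == far.

    y_true: binary ground-truth labels (assumed input, 1 = positive class)
    y_score: detector scores (assumed input)
    """
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # fpr from roc_curve is monotonically increasing, so np.interp is valid.
    return np.interp(far, fpr, tpr)


# Toy example with synthetic scores (illustrative only):
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_score = y_true * 0.5 + rng.random(1000) * 0.8
print("TDR @ 0.5%% FAR: %.4f" % tdr_at_far(y_true, y_score, far=0.005))
```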
Hi, below is the code for computing TDR at low FAR:
```python
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import os
import csv
import warnings
from pathlib import Path

from sklearn.metrics import roc_curve, auc
from menpo.visualize.viewmatplotlib import sample_colours_from_colourmap
from prettytable import PrettyTable

warnings.filterwarnings("ignore")


def write_result(result_files, save_path, dataset_name, label):
    # Collect one score array per result file; the parent folder name
    # identifies the method.
    methods = []
    scores = []
    for file in result_files:
        methods.append(Path(file).parent.stem)
        scores.append(np.load(file))
    methods = np.array(methods)
    scores = dict(zip(methods, scores))
    colours = dict(
        zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2')))

    # FAR values at which TDR is reported.
    x_labels = [10**-6, 10**-5, 10**-4, 10**-3, 10**-2, 10**-1]
    tpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels])

    fig = plt.figure()
    for method in methods:
        fpr, tpr, _ = roc_curve(label, scores[method])
        roc_auc = auc(fpr, tpr)
        # Reverse so that, at equal fpr, the largest tpr is selected.
        fpr = np.flipud(fpr)
        tpr = np.flipud(tpr)
        plt.plot(fpr,
                 tpr,
                 color=colours[method],
                 lw=1,
                 label=('[%s (AUC = %0.4f %%)]' %
                        (method.split('-')[-1], roc_auc * 100)))
        tpr_fpr_row = ["%s-%s" % (method, dataset_name)]
        for fpr_iter in np.arange(len(x_labels)):
            # Find the ROC point whose fpr is closest to the target FAR
            # and report the corresponding tpr as a percentage.
            _, min_index = min(
                list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))
            tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))
        tpr_fpr_table.add_row(tpr_fpr_row)

    plt.xlim([10**-6, 0.1])
    plt.ylim([0.3, 1.0])
    plt.grid(linestyle='--', linewidth=1)
    plt.xticks(x_labels)
    plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True))
    plt.xscale('log')
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('ROC on IJB')
    plt.legend(loc="lower right")
    fig.savefig(os.path.join(save_path, 'verification_auc.pdf'))
    print(tpr_fpr_table)

    # Write the table to CSV by splitting the pretty-printed rows on '|'.
    result = [tuple(filter(None, map(str.strip, splitline)))
              for line in str(tpr_fpr_table).splitlines()
              for splitline in [line.split("|")] if len(splitline) > 1]
    with open(os.path.join(save_path, 'verification_result.csv'), 'w') as outcsv:
        writer = csv.writer(outcsv)
        writer.writerows(result)


if __name__ == '__main__':
    # y_true / y_pred are placeholders: supply your own ground-truth
    # binary labels and predicted scores here.
    label = y_true[0:400]
    scores = y_pred[0:400]
    np.save("scores.npy", scores)
    score_save_file = "./scores.npy"
    result_files = [score_save_file]
    save_path = './'
    dataset_name = 'x'
    write_result(result_files, save_path, dataset_name, label)
```
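One note: the `x_labels` above are powers of ten, so the FAR = 0.5% operating point from Table 3 (i.e. 0.005) is not among the reported columns. If you want TDR at exactly that FAR, a simple tweak (illustrative, not part of the original script) is to add it to the list:

```python
# Add FAR = 0.5% (0.005) to the reported operating points;
# the rest of the script stays unchanged (illustrative tweak):
x_labels = [10**-6, 10**-5, 10**-4, 10**-3, 0.005, 10**-2, 10**-1]
```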
Hi @vishal3477! Thanks a lot for your quick response; I really appreciate your help. I will check the code and let you know if I have any further questions.