pytorch/tnt

ROC-AUC doesn't reproduce the same value as sklearn.metrics.roc_auc_score

ahmedanis03 opened this issue · 1 comment

Here is sample code to reproduce the issue:

    import numpy as np
    import sklearn.metrics
    from torchnet.meter import AUCMeter

    y_pred = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
                       [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0],
                       [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])

    y_true = np.array([[0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
                       [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                       [0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])

    y_pred = y_pred.reshape([-1])
    y_true = y_true.reshape([-1])

    aucmeter = AUCMeter()
    aucmeter.add(y_pred, y_true)
    print(aucmeter.value()[0], sklearn.metrics.roc_auc_score(y_true, y_pred))

The data sample is not ideal for measuring ROC-AUC, but both implementations should still produce the same value.
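
To check which of the two numbers is right, here is a minimal independent cross-check (a sketch, assuming scipy is available; `rank_auc` is just a helper name I made up). For binary labels, ROC-AUC is equivalent to the normalized Mann-Whitney U statistic with average ranks for ties, which is what `sklearn.metrics.roc_auc_score` reproduces:

    # Rank-based AUC: equivalent to the trapezoidal ROC-AUC for binary labels,
    # with tied scores handled via average ranks.
    from scipy.stats import rankdata

    def rank_auc(y_true, y_score):
        ranks = rankdata(y_score)            # average ranks for tied scores
        n_pos = y_true.sum()
        n_neg = len(y_true) - n_pos
        return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    print(rank_auc(y_true, y_pred))          # should agree with roc_auc_score above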

Perhaps we should just switch to sklearn and make it a dependency?
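
For reference, a rough sketch of what delegating to sklearn could look like (hypothetical class name, not the actual tnt API), keeping the same `add()` / `value()` / `reset()` interface:

    # Hypothetical sketch: an AUCMeter-style wrapper that defers the math to sklearn.
    import numpy as np
    import sklearn.metrics

    class SklearnAUCMeter:
        def __init__(self):
            self.reset()

        def reset(self):
            self.scores = []
            self.targets = []

        def add(self, output, target):
            # Accumulate flattened scores and binary targets across batches.
            self.scores.append(np.asarray(output).ravel())
            self.targets.append(np.asarray(target).ravel())

        def value(self):
            scores = np.concatenate(self.scores)
            targets = np.concatenate(self.targets)
            return sklearn.metrics.roc_auc_score(targets, scores)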