
Is APMeter working the correct way?

BeardedWhale opened this issue

From the doc:

The APMeter measures the average precision per class.

Consider this example:

output = torch.tensor([[0.1000, 0.9000],
        [0.1000, 0.9000],
        [0.1000, 0.9000],
        [0.1000, 0.9000]])


target = torch.tensor([[1., 0.],
        [0., 1.],
        [1., 0.],
        [0., 1.]])

From my understanding of what is written in the doc, I should get:

accuracies:
  class 0: 0%
  class 1: 100%

precision:
  class 0: 0%
  class 1: 50%
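
For reference, here is a minimal sketch of the computation I have in mind (threshold with argmax, then count hits per class). This is my own by-hand version, not what APMeter does internally:

import torch

output = torch.tensor([[0.1, 0.9],
                       [0.1, 0.9],
                       [0.1, 0.9],
                       [0.1, 0.9]])
target = torch.tensor([[1., 0.],
                       [0., 1.],
                       [1., 0.],
                       [0., 1.]])

pred = output.argmax(dim=1)  # every sample is predicted as class 1
true = target.argmax(dim=1)  # true labels: [0, 1, 0, 1]

for c in range(2):
    is_pred_c = pred == c
    is_true_c = true == c
    hits = (is_pred_c & is_true_c).sum().item()
    # per-class accuracy (recall): hits among the actual class-c samples
    recall = hits / max(is_true_c.sum().item(), 1)
    # precision: hits among the samples predicted as class c
    precision = hits / max(is_pred_c.sum().item(), 1)
    print(f"class {c}: recall={recall:.2f} precision={precision:.2f}")
# class 0: recall=0.00 precision=0.00
# class 1: recall=1.00 precision=0.50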

So I ran the following code, expecting to get [0, 0.5] as the output:

>> import torchnet
>> class_meter = torchnet.meter.APMeter()
>> class_meter.add(output, target)
>> class_meter.value()
tensor([0.7095, 0.5000])

What do these numbers mean?
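
To probe what "average precision" could mean here, I also tried computing a ranking-based AP by hand, using the usual definition (rank samples by score, then average the precision at each positive). This is just my reading of the metric, not APMeter's actual implementation, and it reuses output and target from above:

def average_precision(scores, truth):
    # Rank samples by descending score; with tied scores the
    # ranking (and hence the AP) depends on how the sort breaks ties.
    order = torch.argsort(scores, descending=True)
    ranked = truth[order]
    tp = ranked.cumsum(0)
    precision_at_k = tp / torch.arange(1, len(ranked) + 1)
    return precision_at_k[ranked.bool()].mean().item()

for c in range(2):
    print(f"class {c}: AP = {average_precision(output[:, c], target[:, c]):.4f}")

Since every score within each column of this example is tied, the ranking is arbitrary, so I would not even expect a stable answer from a sort-based AP here.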

So, to understand these numbers, I ran sklearn's classification report:

>> from sklearn.metrics import classification_report
>> print(classification_report(torch.argmax(target, dim=1), torch.argmax(output, dim=1)))

              precision    recall  f1-score   support

           0       0.00      0.00      0.00         2
           1       0.50      1.00      0.67         2

    accuracy                           0.50         4
   macro avg       0.25      0.50      0.33         4
weighted avg       0.25      0.50      0.33         4
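
As one more cross-check, sklearn also has a per-class average precision function; unlike a plain sort, it groups tied scores into a single threshold, so ties are handled deterministically. Again, this is just me triangulating, I don't know whether APMeter is supposed to match it:

>> from sklearn.metrics import average_precision_score
>> [average_precision_score(target[:, c], output[:, c]) for c in range(2)]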

No such numbers anywhere, so I wonder what this function is doing. It is definitely not measuring

the average precision per class.

Am I missing something? 👀