pytorch/tnt

Confusion Meter with batch of 1.

CielAl opened this issue · 0 comments

Hi,
it appears that the logic of https://github.com/pytorch/tnt/blob/master/torchnet/meter/confusionmeter.py#L44
implies that a 1-d prediction tensor is always treated as a row of hard predictions for different input data points, rather than as the scores over the categories of a single data point.
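To make the ambiguity concrete, here is a simplified NumPy sketch of that dimension branch (hedged: this is not the actual torchnet code, just an illustration of the interpretation it applies):

```python
import numpy as np

def interpret_predictions(predicted):
    """Sketch of the branch in ConfusionMeter.add (simplified):
    2-d input -> argmax per row; 1-d input -> taken as hard labels."""
    predicted = np.asarray(predicted)
    if predicted.ndim == 2:
        return predicted.argmax(axis=1)   # (N, k) scores -> N labels
    return predicted.astype(int)          # 1-d: assumed to be labels already

scores = np.array([0.1, 0.7, 0.2])        # one sample, k = 3 classes

# Batch of 1 without a leading dim: the 3 scores are misread as 3 labels.
print(interpret_predictions(scores))         # -> [0 0 0]

# With a leading singleton dim, argmax gives the intended single label.
print(interpret_predictions(scores[None, :]))  # -> [1]
```

So the same score vector is counted as three (nonsensical) samples or one correct sample depending only on whether the singleton batch dimension is present.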

This behavior is slightly different from the description in the documentation (https://tnt.readthedocs.io/en/latest/source/torchnet.meter.html#confusionmeter)
and requires the user to either:
(1) add a leading singleton dimension manually for a batch of 1, or
(2) use the `drop_last` option of the dataloader.
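Workaround (1) can be sketched as follows with NumPy (with torch tensors the equivalent reshape would be `output.unsqueeze(0)`; the variable names here are illustrative):

```python
import numpy as np

k = 3
output = np.array([0.1, 0.7, 0.2])   # scores for one sample, shape (k,)
target = np.array([1])               # its label, shape (1,)

# Add a leading singleton dimension so a single sample's k scores
# arrive as shape (1, k) instead of (k,).
if output.ndim == 1:
    output = output[None, :]         # shape (1, k): one row per sample

# meter.add(output, target) would now take the argmax per row as intended.
print(output.shape)                  # -> (1, 3)
print(output.argmax(axis=1))         # -> [1]
```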

I wonder whether it would be possible to additionally check, when pred is 1-d, whether target.shape[0] is exactly 1 and the length of pred is exactly k, so that the batch-of-1 case is still covered even when the predicted.shape[0] == target.shape[0] check fails?
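The suggested check could look roughly like this (a hypothetical sketch, not a proposed patch to the actual torchnet source; `normalize_predictions` is an invented helper name):

```python
import numpy as np

def normalize_predictions(predicted, target, k):
    """Hypothetical sketch of the extra check: when `predicted` is 1-d
    but the batch size is 1 and len(predicted) == k, treat it as the
    k scores of a single sample rather than as k hard labels."""
    predicted = np.asarray(predicted)
    target = np.atleast_1d(np.asarray(target))
    if predicted.ndim == 1 and predicted.shape[0] != target.shape[0]:
        if target.shape[0] == 1 and predicted.shape[0] == k:
            predicted = predicted[None, :]   # recover the batch-of-1 case
        else:
            raise ValueError("number of predictions does not match targets")
    if predicted.ndim == 2:
        predicted = predicted.argmax(axis=1)
    return predicted

# Batch of 1: a length-k score vector now resolves to a single label.
print(normalize_predictions([0.1, 0.7, 0.2], [1], k=3))   # -> [1]
# Ordinary 1-d label input for a batch of 3 is left untouched.
print(normalize_predictions([2, 0, 1], [2, 0, 1], k=3))   # -> [2 0 1]
```

Note one residual ambiguity: a 1-d input of length k paired with a length-k target is still read as k labels, which is presumably the intended behavior for label input.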

(Actually this may be more a question of consistency between the different meters: the batch-of-1 behavior of meters such as AverageValueMeter and ClassErrorMeter currently appears to differ from that of ConfusionMeter.)