MeanAveragePrecision - bug in `max_detection_thresholds`
nisyad-ms opened this issue · 3 comments
🐛 Bug
`MeanAveragePrecision` returns `map=-1` whenever `max_detection_thresholds` does not include the value 100.
To Reproduce
```python
from torch import tensor
from torchmetrics.detection import MeanAveragePrecision

preds = [dict(boxes=tensor([[0, 0, 100, 100],
                            [0, 0, 50, 50]]),
              scores=tensor([1.0, 0.9]),
              labels=tensor([0, 1]))]
target = [dict(boxes=tensor([[0, 0, 100, 100],
                             [0, 0, 50, 50]]),
               labels=tensor([0, 1]))]

metric = MeanAveragePrecision(iou_type="bbox", max_detection_thresholds=[1, 10, 50])
metric.update(preds, target)
result = metric.compute()
result  # map = -1
```
Expected behavior
Expected: map=1 (for 50 max detections)
Environment
- TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 1.2.1
- Python & PyTorch Version (e.g., 1.0): 3.10
- Any other relevant information such as OS (e.g., Linux): Ubuntu 22.04
Hi @nisyad-ms, thanks for reporting this issue.
Sadly, this is due to a known bug in the official pycocotools backend that we use for the computations. Specifically, this line:
https://github.com/cocodataset/cocoapi/blob/8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9/PythonAPI/pycocotools/cocoeval.py#L460
should have been
```python
stats[0] = _summarize(1, maxDets=self.params.maxDets[2])
```
for it to work. Sadly, the repo is no longer actively maintained, but it is still considered the official reference implementation for mAP.
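To illustrate the failure mode, here is a simplified sketch (a hypothetical helper mirroring the shape of the logic in pycocotools' `_summarize`, not the real library code): the summary is computed with a hardcoded `maxDets=100`, which is looked up in `params.maxDets`; when the user-supplied list does not contain 100, the lookup is empty and the metric falls back to -1.

```python
# Hypothetical, simplified stand-in for pycocotools' _summarize lookup.
def summarize(max_dets_params, max_dets=100):
    # cocoeval keeps one result slot per entry in params.maxDets and
    # selects the slot whose setting equals the requested max_dets.
    matching = [m for m in max_dets_params if m == max_dets]
    # No matching slot -> the selected scores are empty -> reported as -1.
    return 1.0 if matching else -1.0

print(summarize([1, 10, 50]))      # hardcoded default 100 finds no slot: -1.0
print(summarize([1, 10, 50], 50))  # the proposed fix passes maxDets[2]: 1.0
```

The `1.0` here is just a placeholder for "a valid score"; the point is only that the hardcoded 100 never matches a custom threshold list.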
Instead, you can install the faster-coco-eval backend (https://github.com/MiXaiLL76/faster_coco_eval), which we also support. This backend has implemented the fix, so your code computes the correct value.
```python
metric = MeanAveragePrecision(iou_type="bbox", max_detection_thresholds=[1, 10, 50],
                              backend="faster_coco_eval")
metric.update(preds, target)
result = metric.compute()
print(result)
# {'map': tensor(1.), ...
```
Closing the issue because we cannot fix this on our side.
Thanks @SkafteNicki for the information. How do I ensure the faster-coco-eval backend is used? Will just installing it do? Thanks again.
Sorry, I should have specified that. You install the backend with `pip install faster-coco-eval`, and then when initializing the `MeanAveragePrecision` class you need to set the `backend` argument to `"faster_coco_eval"`.
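A quick way to confirm the optional package is actually importable before selecting it (a minimal sketch using only the standard library; the fallback string `"pycocotools"` is the default backend name):

```python
import importlib.util

# Use the fixed backend when the optional package is installed,
# otherwise fall back to the default pycocotools backend.
if importlib.util.find_spec("faster_coco_eval") is not None:
    backend = "faster_coco_eval"
else:
    backend = "pycocotools"

print(backend)
```

The chosen string can then be passed straight to `MeanAveragePrecision(..., backend=backend)`.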