
A small and simple Python/numpy utility to compute mean average precision (mAP) on detection tasks.


Detection mAP

A simple utility tool to evaluate bounding box detection, following the Pascal VOC paper.

To learn about this metric, I recommend this excellent blog post by Sancho McCann before reading the paper: link

Note that the method has not been compared against the original VOC implementation! (See TODO)

Features

  • Simple: numpy and matplotlib are the only dependencies
  • Compute a running evaluation: input predictions/ground truth at each frame, no need to save them to files
  • Plot (matplotlib) per-class pr-curves with interpolated average precision (default) or average precision
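The interpolated average precision mentioned above can be sketched as follows. This is a minimal numpy sketch of the VOC-style precision envelope, not this library's exact code; the function name is mine:

```python
import numpy as np

def interpolated_average_precision(precisions, recalls):
    """Interpolated AP in the spirit of the Pascal VOC paper:
    at each recall level, keep the maximum precision reached at
    any recall >= that level, then integrate over the recall axis.
    (Illustrative sketch only.)"""
    order = np.argsort(recalls)          # sort points by increasing recall
    r = np.asarray(recalls, dtype=float)[order]
    p = np.asarray(precisions, dtype=float)[order]
    # Precision envelope: p_interp(r) = max over r' >= r of p(r')
    p_interp = np.maximum.accumulate(p[::-1])[::-1]
    # Integrate: sum of (recall step) * interpolated precision
    r_prev = np.concatenate(([0.0], r[:-1]))
    return float(np.sum((r - r_prev) * p_interp))
```

A perfect detector (precision 1.0 at every recall level) yields an AP of 1.0; drops in precision at high recall reduce the integral.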

Method

Multiclass mAP

Every class is handled one against all the others (class x against every other class z).

  • True positive (TP):
    • Gt x predicted as x
  • False positive (FP):
    • Prediction x when its Gt x already has a TP prediction
    • Prediction x not overlapping any Gt x
  • False negative (FN):
    • Gt x not predicted as x
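The matching rules above can be sketched per class and per frame. This is an illustrative sketch assuming a standard IoU threshold of 0.5; `match_class` and `iou` are hypothetical helpers, not this library's API:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_class(pred_boxes, pred_conf, gt_boxes, iou_threshold=0.5):
    """Count TP/FP/FN for one class in one frame, following the
    rules above: each Gt matches at most one prediction (most
    confident first); extra or non-overlapping predictions are FP;
    unmatched Gt are FN. (Illustrative sketch only.)"""
    matched = set()
    tp = fp = 0
    for i in np.argsort(pred_conf)[::-1]:  # most confident first
        best_j, best_iou = -1, iou_threshold
        for j, gt in enumerate(gt_boxes):
            if j in matched:
                continue  # this Gt already has its TP prediction
            overlap = iou(pred_boxes[i], gt)
            if overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j >= 0:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)
    return tp, fp, fn
```

For example, two predictions overlapping the same ground truth box give one TP (the more confident one) and one FP, per the duplicate-detection rule.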

Example frame

(example image)

Code

All you need are your predicted bounding boxes with their classes and confidence scores, and the ground truth bounding boxes with their classes.

  import matplotlib.pyplot as plt
  # DetectionMAP is the evaluator class provided by this package
  # (adjust the import path to your installation)

  frames = [(pred_bb1, pred_cls1, pred_conf1, gt_bb1, gt_cls1),
            (pred_bb2, pred_cls2, pred_conf2, gt_bb2, gt_cls2),
            (pred_bb3, pred_cls3, pred_conf3, gt_bb3, gt_cls3)]
  n_class = 7

  mAP = DetectionMAP(n_class)
  for frame in frames:
      mAP.evaluate(*frame)   # accumulate TP/FP/FN for this frame

  mAP.plot()                 # draw the per-class pr-curves
  plt.show()  # or plt.savefig(path)

In this example a frame is a tuple containing:

  • Predicted bounding boxes: numpy array [n, 4]
  • Predicted classes: numpy array [n]
  • Predicted confidences: numpy array [n]
  • Ground truth bounding boxes: numpy array [m, 4]
  • Ground truth classes: numpy array [m]

Note that the bounding boxes are represented as two corner points: [x1, y1, x2, y2]
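For instance, a single frame with hypothetical values could be built like this (the array names mirror the tuple described above; the box coordinates are made up for illustration):

```python
import numpy as np

# Two predictions and one ground truth, in [x1, y1, x2, y2] corner format
pred_bb = np.array([[10, 10, 50, 50],
                    [60, 60, 90, 90]], dtype=float)  # [n, 4]
pred_cls = np.array([0, 2])            # class index per prediction, [n]
pred_conf = np.array([0.95, 0.40])     # confidence per prediction, [n]
gt_bb = np.array([[12, 12, 48, 48]], dtype=float)  # [m, 4]
gt_cls = np.array([0])                 # class index per ground truth, [m]

frame = (pred_bb, pred_cls, pred_conf, gt_bb, gt_cls)
```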

(example image)

TODO

Contribution

And of course, any bugfixes or contributions are always welcome!