ultralytics/yolov5

πŸŒŸπŸ’Ά Competition: pycocotools mAP Alignment

glenn-jocher opened this issue Β· 4 comments

πŸŒŸπŸ’Ά Ultralytics Competition: pycocotools mAP Alignment

I'm super excited to announce our very first Ultralytics AI Competition πŸ˜ƒ!! This is the first in a series of challenges we face at Ultralytics that we want to share with the community so everyone can help solve them. Competition participants can contribute open-source solutions to receive prize money and to improve the YOLOv5 experience for everyone.

Problem πŸ€”

Our local mAP results do not align as well as we'd like with pycocotools mAP results. They are similar to within about 1%, but we want to improve this further, to give better confidence in the metrics we produce on custom datasets where pycocotools-format JSON labels are unavailable. Below is an example of the problem from our Colab notebook, showing YOLOv5x COCO local 49.6 mAP@0.5:0.95 vs pycocotools 50.7 mAP@0.5:0.95.

mAP Values

YOLOv5 mAP is computed in val.py with functions from utils/metrics.py:

yolov5/val.py

Lines 237 to 243 in f17c86b

# Compute metrics
stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
if len(stats) and stats[0].any():
    tp, fp, p, r, f1, ap, ap_class = ap_per_class(*stats, plot=plots, save_dir=save_dir, names=names)
    ap50, ap = ap[:, 0], ap.mean(1)  # AP@0.5, AP@0.5:0.95
    mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
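To make the averaging in the snippet above concrete, here is a minimal sketch of how per-class AP values become the reported mAP numbers. The `ap` array shape mirrors what `ap_per_class()` in utils/metrics.py returns (one row per class, one column per IoU threshold from 0.5 to 0.95 in steps of 0.05); the values themselves are made up for illustration.

```python
import numpy as np

# Hypothetical AP array: rows = classes, columns = the 10 IoU thresholds
# 0.5:0.05:0.95, mirroring the output shape of ap_per_class().
ap = np.array([
    [0.70, 0.68, 0.65, 0.60, 0.55, 0.50, 0.42, 0.33, 0.20, 0.05],  # class 0
    [0.80, 0.78, 0.75, 0.71, 0.66, 0.60, 0.52, 0.40, 0.25, 0.08],  # class 1
])

ap50 = ap[:, 0]      # AP@0.5 per class (first IoU column)
ap_full = ap.mean(1) # AP@0.5:0.95 per class (mean over the 10 IoU columns)
map50 = ap50.mean()  # mAP@0.5, averaged over classes
map = ap_full.mean() # mAP@0.5:0.95 (shadows the builtin, mirroring val.py)
```

Any mismatch with pycocotools must therefore originate upstream of this step, in how the per-class, per-threshold AP values are computed.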

Solution πŸ’‘

The winning solution must meet all of the following criteria:

  1. Local mAP@0.5 and mAP@0.5:0.95 must match pycocotools mAP@0.5 and mAP@0.5:0.95 to within 0.1 in all test cases. For example, 50.0 vs 50.1 qualifies, but 49.9 vs 50.1 does not.
  2. test.py execution time must remain within 10% of the current time for all test cases. If test.py currently runs in 60 seconds, a winning submission must run in under 66 seconds. This prevents simply importing the pycocotools code, which is itself very slow and is the reason we have not adopted it fully.
  3. Solutions must be submitted in the form of a Pull Request to the ultralytics/yolov5 repository and pass all automatic CI checks.
  4. Test results below must accompany PRs (winning submissions will be independently verified by Ultralytics).
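Criterion 1 above can be expressed as a simple tolerance check. The helper below is hypothetical (not part of the repository), and rounds the difference before comparing to sidestep floating-point noise at exactly 0.1; values are on the 0-100 scale used in the examples.

```python
def maps_aligned(local_map: float, coco_map: float, tol: float = 0.1) -> bool:
    """Hypothetical check for criterion 1: local mAP vs pycocotools mAP.
    Rounding avoids float artifacts when the gap is exactly at the tolerance."""
    return round(abs(local_map - coco_map), 6) <= tol
```

Under this check, `maps_aligned(50.0, 50.1)` qualifies while `maps_aligned(49.9, 50.1)` does not, matching the examples in criterion 1.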

Test cases are below (8 different scenarios). This code can be copied and pasted into a Colab notebook, and the cell output should accompany competition submissions.

# Run pycocotools mAP Alignment Competition tests
for weights in 'yolov5s.pt', 'yolov5x.pt':
  for img in 320, 640:
    for iou in 0.45, 0.65:
      !python test.py --weights {weights} --data coco.yaml --img {img} --iou {iou}

πŸ’Ά Prize €1000.00

The first submission that meets all of the Solution requirements will claim the full prize funds of €1000.00 (1000 EUR) from Ultralytics. Funds will be converted to the participant's local currency using the exchange rate on the date of the winning submission.

πŸ“… Deadline

The deadline for submissions is March 31st, 2022. After this date the competition will be closed. If a winning submission is received before the deadline then the competition may be closed early.

If multiple submissions meet the Solution requirements, the earliest-dated submission will claim the full prize. If a PR is composed of multiple commits, the submission timestamp will be the time of the earliest commit in the PR that meets all of the Solution requirements.

βœ… Participation Requirements

This competition is open to any individual or organization from any country πŸ‡ΊπŸ‡³. There are no restrictions on citizenship, age, gender or location. To receive prize funds, the winning participant must meet the eligibility requirements of our funds transfer provider and provide a name, address, phone number, email and bank info.

Hello,

We cannot reproduce the COCO API's mAP from the provided *.txt label files. The COCO API ignores false-positive bboxes that match its ignore criteria (e.g. crowd regions). For participants' convenience, I suggest providing additional *.txt files for bboxes with iscrowd=1.

By the way, the slow speed of the COCO API is mainly because it treats AP50, AP55, ..., AP95 as independent cases, re-matching detections at every IoU threshold.
While in this repo, the matching is decided by IoU 0.5 only.
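The per-threshold matching the commenter describes can be sketched as follows. This is a simplified illustration (ignoring iscrowd regions, area ranges, and max-detection limits), not pycocotools' actual implementation: the detection-to-GT assignment is redone independently at every IoU threshold, which is the repeated work that makes the COCO API slow.

```python
import numpy as np

def match_per_threshold(ious: np.ndarray, thresholds) -> np.ndarray:
    """Greedy matching redone at each IoU threshold (pycocotools style).
    ious: (n_detections, n_gt) IoU matrix, detections sorted by confidence.
    Returns a bool TP matrix of shape (n_thresholds, n_detections)."""
    tp = np.zeros((len(thresholds), ious.shape[0]), dtype=bool)
    for t, thr in enumerate(thresholds):
        taken = set()  # GT boxes already matched at this threshold
        for d in range(ious.shape[0]):
            best_g, best_iou = -1, thr  # require IoU >= threshold
            for g in range(ious.shape[1]):
                if g not in taken and ious[d, g] >= best_iou:
                    best_g, best_iou = g, ious[d, g]
            if best_g >= 0:
                taken.add(best_g)
                tp[t, d] = True
    return tp
```

Note that a detection counted as a TP at IoU 0.5 can become an FP at a stricter threshold, and the freed-up GT box may then be claimed by a different detection, so the match sets genuinely differ per threshold.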

hiyyg commented

FYI, there exist some faster solutions that match pycocotools exactly: https://github.com/facebookresearch/detectron2/blob/master/detectron2/evaluation/fast_eval_api.py.

Is the script you use for producing the coco evaluation available?

@joangog yes it's here:

yolov5/val.py

Lines 278 to 291 in d885799

try:  # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
    check_requirements(['pycocotools'])
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval
    anno = COCO(anno_json)  # init annotations api
    pred = anno.loadRes(pred_json)  # init predictions api
    eval = COCOeval(anno, pred, 'bbox')
    if is_coco:
        eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files]  # image IDs to evaluate
    eval.evaluate()
    eval.accumulate()
    eval.summarize()
    map, map50 = eval.stats[:2]  # update results (mAP@0.5:0.95, mAP@0.5)