The code here is borrowed from the review object detection metrics repository.
This repository contains code to calculate COCO-style object detection metrics per image, as well as the mean across all images. A specific format, described in the Usage section, is used for passing the ground truths and detections. The code outputs a CSV file with one row per image; the final row contains the averaged score for each metric.
The results path should contain two folders, groundtruths and detections, plus an images folder (only PNG supported) used to read image sizes. The groundtruths folder contains txt files in the same format as YOLOv5 training labels, and the detections folder contains txt files in the format produced by YOLOv5's detect.py script. The following metrics are computed:
- AP
- AP50
- AP75
- APsmall
- APmedium
- APlarge
- AR1
- AR10
- AR100
- ARsmall
- ARmedium
- ARlarge
The review object detection metrics repository works well and is UI-based, but I could not find a way to get the metrics for each image, nor an API version of the code that could run alongside training/evaluation.
You can also refer to utils.py for code to convert bounding box predictions to text files. These text files can also be used with the review object detection metrics pipeline for calculating PASCAL or COCO scores.
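As a rough sketch of what such a conversion looks like — assuming pixel-space (x1, y1, x2, y2) boxes; the function names below are illustrative and not the actual utils.py API:

```python
def xyxy_to_yolo_line(cls_name, conf, box, img_w, img_h):
    """Convert a pixel-space (x1, y1, x2, y2) box to a detection line
    "[class] [confidence] [cx] [cy] [w] [h]" with coordinates
    normalized to the image size (YOLO-style relative format)."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_name} {conf:.4f} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"


def write_detections(path, detections, img_w, img_h):
    """Write one detection per line to a .txt file.
    `detections` is a list of (class_name, confidence, (x1, y1, x2, y2))."""
    with open(path, "w") as f:
        for cls_name, conf, box in detections:
            f.write(xyxy_to_yolo_line(cls_name, conf, box, img_w, img_h) + "\n")
```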
```
python main.py -p path_to_results
```
This folder should contain two subfolders named groundtruths and detections, plus an images folder (only PNG supported) used to read image sizes, since the boxes are in relative format. The groundtruths and detections folders should contain one text file per image.
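Concretely, the expected layout looks like this (file names are illustrative):

```
path_to_results/
├── groundtruths/
│   ├── img_001.txt
│   └── img_002.txt
├── detections/
│   ├── img_001.txt
│   └── img_002.txt
└── images/
    ├── img_001.png
    └── img_002.png
```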
For each image, the ground truth text file has one line per box in the format "[class] [center_x] [center_y] [width] [height]\n". Coordinates are relative (normalized by image width and height), and the class can be given as a string.
For each image, the detection text file has one line per box in the format "[class] [confidence score] [center_x] [center_y] [width] [height]\n". Coordinates are relative (normalized by image width and height), and the class can be given as a string.
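A minimal sketch of parsing these two line formats (the function names are illustrative, not part of this repository):

```python
def parse_groundtruth_line(line):
    """Parse "[class] [cx] [cy] [w] [h]" into (class, box)."""
    parts = line.split()
    cls_name = parts[0]
    cx, cy, w, h = map(float, parts[1:])
    return cls_name, (cx, cy, w, h)


def parse_detection_line(line):
    """Parse "[class] [confidence] [cx] [cy] [w] [h]" into
    (class, confidence, box)."""
    parts = line.split()
    cls_name = parts[0]
    conf, cx, cy, w, h = map(float, parts[1:])
    return cls_name, conf, (cx, cy, w, h)
```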
The output of this code is a CSV file with the 12 COCO object detection metrics for each image. The last row of this CSV is the mean score of these metrics across all images, and the first column is the image name. A sample CSV can be seen in the example_result folder. A value of -1 is used wherever a metric was NaN (invalid). The averaged scores for all images are also printed as output of the code.
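A minimal sketch of consuming that CSV downstream — the helper names and the exact column headers here are assumptions, not this repository's API; note that the final mean row is read like any other row, and -1 entries should be skipped when re-averaging:

```python
import csv


def load_per_image_metrics(csv_path):
    """Read the per-image metrics CSV: first column is the image
    name, remaining columns are metric values (-1 marks NaN)."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        name_col = reader.fieldnames[0]
        rows = {}
        for row in reader:
            name = row.pop(name_col)
            rows[name] = {k: float(v) for k, v in row.items()}
    return rows


def mean_ignoring_invalid(rows, metric):
    """Average one metric over images, skipping -1 (invalid) values."""
    vals = [m[metric] for m in rows.values() if m[metric] != -1]
    return sum(vals) / len(vals) if vals else -1
```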