open-mmlab/mmdetection

Box Predictions - Confidence Thresholding during Evaluation

laurenzheidrich opened this issue · 2 comments

I am trying to compute evaluation metrics for my fine-tuned MM-Grounding-DINO model.

I am doing this by running

```shell
python tools/test.py $config_file $weight_file --work-dir ts_result --out ts_result/inference.pkl
```

This works fine and gives me a set of AP / AR results.

When I inspect the .pkl file, however, it contains many instance predictions per image with decreasing confidence scores.

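For reference, this is roughly how I inspect the dumped file (a minimal sketch; I am assuming each entry is a plain dict with a `pred_instances` field holding `scores` / `bboxes` / `labels`, which is what the file produced by `--out` looks like on my side, and the 0.5 is just an example value):

```python
# Minimal sketch of how I inspect the dumped predictions. Assumption: each
# entry written by --out is a plain dict with a 'pred_instances' dict that
# holds 'scores', 'bboxes' and 'labels' (this is what my file looks like).
import pickle

SCORE_THR = 0.5  # example value, not taken from any config

with open('ts_result/inference.pkl', 'rb') as f:
    results = pickle.load(f)

for res in results:
    scores = res['pred_instances']['scores']
    kept = int((scores > SCORE_THR).sum())
    print(res.get('img_id'), 'total:', len(scores), 'above threshold:', kept)
```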

When I visualise the predicted boxes using test.py, however, only bounding boxes with a confidence > 0.3 are drawn into the image.

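I suspect the 0.3 comes from the visualization hook rather than from the model itself. Below is the config override I experimented with (assuming the standard `DetVisualizationHook` from mmdet 3.x, whose `score_thr` appears to default to 0.3), but I am not sure this is the intended place to set it:

```python
# Sketch of the config override I tried for the drawing threshold used by
# tools/test.py when writing visualizations (e.g. with --show-dir).
# Assumes the standard DetVisualizationHook from mmdet 3.x, whose score_thr
# seems to default to 0.3; as far as I can tell this only affects which
# boxes are drawn, not the computed metrics.
default_hooks = dict(
    visualization=dict(
        type='DetVisualizationHook',
        draw=True,
        score_thr=0.5))  # example value
```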

So my question is: for the calculated AP / AR results, how and where can the confidence threshold be set? For example, I might want to keep only predictions with a confidence of at least 0.5, but I have no idea where to change that. The same goes for the visualizations: where is the intended place to set which bounding boxes are drawn?
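
As a workaround I could post-filter the dumped predictions myself, roughly like the sketch below (same assumed `pred_instances` layout as above; 0.5 is again just an example), but I would much rather set the threshold in one place in the config, if that is supported:

```python
# Sketch of the post-hoc workaround I considered: drop all predictions below
# 0.5 from the dumped results and save a filtered copy. This does not change
# the AP / AR that tools/test.py already reported; it only trims the file.
import pickle

with open('ts_result/inference.pkl', 'rb') as f:
    results = pickle.load(f)

for res in results:
    pred = res['pred_instances']
    keep = pred['scores'] >= 0.5
    for key in ('scores', 'bboxes', 'labels'):
        pred[key] = pred[key][keep]

with open('ts_result/inference_thr05.pkl', 'wb') as f:
    pickle.dump(results, f)
```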

I would greatly appreciate some help.

Hey, what's your answer to this? In particular: how and where can the confidence threshold be set?