wutong16/DistributionBalancedLoss

Lower mAP when training on VOC


Hi, thank you for providing the source code for your paper.
I ran your VOC training code 3 times, but always got a total mAP of around 77.7. I notice that in Table 1 of your paper, the mAP of "DB-focal" is 78.94. I was wondering if I did something wrong, and why this is the case?

Below is the log output from your training code.
Split: head mAP:0.7344 acc:0.8429 micro:0.6096 macro:0.5392
Split:middle mAP:0.8359 acc:0.9375 micro:0.6400 macro:0.6527
Split: tail mAP:0.7651 acc:0.9400 micro:0.5492 macro:0.5305
Split: Total mAP:0.7771 acc:0.9101 micro:0.6024 macro:0.5698

Thanks in advance.

Hi @Kevin655!

Thanks for your attention to our work.
The discrepancy is probably due to sensitivity to randomness, given the small scale of the dataset.
Please try another random seed, e.g. `--seed 1`, which produces a total mAP of around 79.0 on my server; see the sketch below for what the seed flag controls.
We will also consider improving the robustness of our method to randomness.
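
For anyone else hitting this: in mmdetection-style codebases like this one, a `--seed` flag typically seeds every relevant RNG before training starts. The exact helper in this repo may differ, but a minimal sketch of what such seeding involves looks like this:

```python
import random

import numpy as np
import torch


def set_random_seed(seed, deterministic=False):
    """Seed Python, NumPy, and PyTorch RNGs so repeated runs are comparable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    if deterministic:
        # Trade some speed for reproducible cuDNN kernel selection.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
```

Rerunning training with a different seed would then look something like `python tools/train.py <your_voc_config.py> --seed 1` (the script and config paths here are placeholders; use whatever you trained with originally).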

Got it! Thank you for explaining how to reproduce the result!