[BUG] About AutoAssign losses
luweishuang opened this issue · 2 comments
I trained AutoAssign on the COCO 2017 dataset and got these loss values:
iter: 1/720000 total_loss: 2.400 loss_pos: 2.308 loss_neg: 0.001 loss_norm: 0.091
iter: 20/720000 total_loss: 2.288 loss_pos: 2.187 loss_neg: 0.002 loss_norm: 0.079
iter: 220/720000 total_loss: 1.770 loss_pos: 1.640 loss_neg: 0.039 loss_norm: 0.090
iter: 19200/720000 total_loss: 0.722 loss_pos: 0.555 loss_neg: 0.071 loss_norm: 0.072
iter: 55660/720000 total_loss: 0.742 loss_pos: 0.602 loss_neg: 0.066 loss_norm: 0.066
iter: 93360/720000 total_loss: 0.563 loss_pos: 0.442 loss_neg: 0.060 loss_norm: 0.064
iter: 129080/720000 total_loss: 0.614 loss_pos: 0.429 loss_neg: 0.072 loss_norm: 0.084
As you can see, loss_pos is significantly larger than the other two losses, and neither loss_neg nor loss_norm shows a clear downward trend. I also trained AutoAssign on other datasets and observed similar loss curves. Of course, the AutoAssign detection model does work and produces reasonable results. But can I conclude that the only loss that really matters is loss_pos, and that the other two are almost useless?
Looking at the source code, I suspect that loss_norm is ineffective and is responsible for the results above. loss_norm comes from the Gaussian shape constraint. Is this the cause of the loss behavior above? Are there any suggestions or methods for improvement?
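For intuition, here is a minimal sketch of the kind of Gaussian center prior the shape constraint is based on, plus the relative share each logged term contributes to the total at the last logged iteration. The function name, signature, and default parameters are illustrative assumptions, not the actual API of the AutoAssign repo:

```python
import math

def center_prior(dx, dy, mu=(0.0, 0.0), sigma=(1.0, 1.0)):
    """Gaussian weight for a location at offset (dx, dy) from the box center.

    Hypothetical helper: AutoAssign learns per-category (mu, sigma) so that
    locations near the (shifted) center get higher positive weights.
    """
    gx = math.exp(-((dx - mu[0]) ** 2) / (2.0 * sigma[0] ** 2))
    gy = math.exp(-((dy - mu[1]) ** 2) / (2.0 * sigma[1] ** 2))
    return gx * gy

# Relative share of each term at iter 129080 (numbers from the log above):
log = {"loss_pos": 0.429, "loss_neg": 0.072, "loss_norm": 0.084}
total = sum(log.values())
shares = {k: v / total for k, v in log.items()}
print(shares)  # loss_pos accounts for over 70% of the total
```

Note that a term being numerically small does not necessarily mean it is useless: loss_neg and loss_norm can still shape the gradients of the assignment weights even when their absolute magnitudes are dwarfed by loss_pos.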
Please file your issue at https://github.com/Megvii-BaseDetection/AutoAssign, not here.