dddzg/up-detr

Ran the code on COCO, but cannot get the same results shown in the paper

rock4you opened this issue · 14 comments

dddzg commented

May I ask for the detailed config you used for training on COCO (number of GPUs, etc.)?
Here is a log from our experiments: https://drive.google.com/file/d/1DQqveOZnMc2VaBhMzl9VilMxdeniiWXo/view?usp=sharing
You can compare your log with it.

We are using 8 V100 GPUs, and the commands are the same as the ones you provided on GitHub.
Is there anything that has to be modified before running the training program?

dddzg commented

I checked it again. There is a mistake in my script, I am so sorry. The lr_backbone should be set to 5e-5 instead of 5e-4. I will update the README. Thanks a lot! I will keep the issue open until you get the right result.
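
For context, here is a minimal sketch of the DETR-style optimizer setup that the lr_backbone flag feeds into, with the corrected value. The placeholder model and the other hyperparameters are illustrative (DETR defaults), not the exact UP-DETR code:

```python
import torch
from torch import nn

# Stand-in model: any module with a submodule literally named "backbone"
# (as in DETR/UP-DETR) works with the parameter split below.
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 64, 3)    # placeholder for ResNet-50
        self.transformer = nn.Linear(64, 256)  # placeholder for the transformer

model = TinyDetector()

lr = 1e-4           # transformer / head learning rate (DETR default)
lr_backbone = 5e-5  # corrected backbone learning rate (the script mistakenly had 5e-4)
weight_decay = 1e-4

# DETR-style parameter groups: everything except the backbone trains at `lr`,
# the backbone trains at the smaller `lr_backbone`.
param_dicts = [
    {"params": [p for n, p in model.named_parameters()
                if "backbone" not in n and p.requires_grad]},
    {"params": [p for n, p in model.named_parameters()
                if "backbone" in n and p.requires_grad],
     "lr": lr_backbone},
]
optimizer = torch.optim.AdamW(param_dicts, lr=lr, weight_decay=weight_decay)
```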

dddzg commented

Hi @rock4you , may I ask for some new progress?

Still running, it looks good this time.
[training screenshot]

The AP on coco val2017 with 300 epochs in Table 2 of the paper is 42.8.
Is this result from a single training run, or the mean over several runs?

dddzg commented

As the COCO dataset is large, the result is reported at the last training epoch of a single run, without averaging over several runs (I guess the result variance is small on COCO). BTW, may I ask for your result?

Still running, the AP around epoch 240 is 0.430.
The training speed is about 40 epochs / day
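
In case it helps to track the curve while it runs, the per-epoch AP can be pulled out of the training log. A small sketch, assuming the DETR-style log.txt where each line is a JSON dict and test_coco_eval_bbox holds the 12 COCO summary metrics (AP@[0.50:0.95] first); the path is a placeholder:

```python
import json

# DETR-style log.txt: one JSON object per line, one line per epoch.
# "test_coco_eval_bbox" is the 12-number COCO summary; index 0 is AP@[0.50:0.95].
with open("output/log.txt") as f:  # placeholder path
    for line in f:
        stats = json.loads(line)
        coco_stats = stats.get("test_coco_eval_bbox")
        if coco_stats:
            print(f"epoch {stats.get('epoch')}: AP = {coco_stats[0]:.3f}")
```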

dddzg commented

Glad to hear it. As far as I have observed, the open-source pre-trained model is a little better than what the paper reports.

๐Ÿ‘๐Ÿป ๐Ÿ‘๐Ÿป

0.435


dddzg commented

Nice to hear the result. Could you attach a more detailed COCO-style evaluation result (such as https://gist.github.com/dddzg/cd0957c5643f5656f6cdc979da4d6db1)?
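
If re-running the evaluation script is inconvenient, the same 12-line table can also be regenerated offline with pycocotools from exported detections. A minimal sketch; the file names are placeholders:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth and detections in COCO JSON format; both paths are placeholders.
coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("val2017_detections.json")  # [{image_id, category_id, bbox, score}, ...]

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table in the format quoted below
```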

The last epoch:

IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.432
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.632
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.458
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.209
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.475
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.622
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.342
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.550
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.589
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.311
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.650
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.816
Training time 7 days, 12:19:43


The highest at epoch 288:

IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.435
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.633
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.464
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.211
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.477
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.626
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.342
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.550
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.589
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.315
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.653
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.818

> I checked it again. There is a mistake in my script, I am so sorry. The lr_backbone should be set to 5e-5 instead of 5e-4. I will update the README. Thanks a lot! I will keep the issue open until you get the right result.

Hi, do you mean in the fine-tuning stage or the pre-training stage? Why should the backbone be frozen in the pre-training stage?