zhenyuw16/UniDetector

Can you release the performance on the 13 OdinW datasets, like GLIP?

Opened this issue · 7 comments

Kegard commented

I have used the released "end-to-end stage" checkpoint on another dataset, and my result is this:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.012
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=1000 ] = 0.025
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=1000 ] = 0.011
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.015
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.513
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.513
OrderedDict([('bbox_mAP', 0.012), ('bbox_mAP_50', 0.025), ('bbox_mAP_75', 0.011), ('bbox_mAP_s', -1.0), ('bbox_mAP_m', 0.0), ('bbox_mAP_l', 0.015), ('bbox_mAP_copypaste', '0.012 0.025 0.011 -1.000 0.000 0.015')])

I want to know whether the code has an error or the result is really this bad. Or could you release the inference code for the OdinW datasets?
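For context, here is a minimal sketch of the sanity check I ran with plain pycocotools (file names are placeholders from my setup, not part of UniDetector or an official OdinW script):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('odinw_subset_val.json')                     # placeholder: val annotations in COCO format
coco_dt = coco_gt.loadRes('unidetector_results.bbox.json')  # placeholder: detections dumped as a COCO result json

# Sanity check that the detection category ids line up with the ground-truth ones.
print('gt categories:', coco_gt.getCatIds())
print('dt categories:', sorted({d['category_id'] for d in coco_dt.anns.values()}))

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()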


Excuse me, have you solved your problem?

I changed my dataset and tested again, and then I got a normal result.


Thanks for your reply!
  1. What do you mean by changing the dataset: modifying the validation-set part?
  2. When you get -1, your loss still decreases and converges normally, right?

  1. I changed the whole dataset, including both the train set and the val set.
  2. I only ran zero-shot inference on another dataset, so I didn't train the model.
    If you can't solve the problem, I suggest changing your dataset and running the test again. I remember the mmdet documentation explains why -1 happens; you can read it there (see also the quick check after this list).
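One common cause you can check quickly: pycocotools prints -1.000 for an area range that contains no ground-truth boxes at all (e.g. a dataset with no "small" objects). A minimal sketch, assuming the val annotations are in COCO json format (the path is a placeholder):

import json

# COCO area ranges used by the evaluator: small < 32^2, medium < 96^2, large above.
AREA_RANGES = {'small': (0, 32 ** 2), 'medium': (32 ** 2, 96 ** 2), 'large': (96 ** 2, 1e10)}

with open('odinw_subset_val.json') as f:  # placeholder path to your val annotations
    gt = json.load(f)

counts = {name: 0 for name in AREA_RANGES}
for ann in gt['annotations']:
    area = ann.get('area', ann['bbox'][2] * ann['bbox'][3])
    for name, (lo, hi) in AREA_RANGES.items():
        if lo <= area < hi:
            counts[name] += 1

print(counts)  # a zero count here matches an AP/AR of -1.000 in that row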
I see, thank you! But I don't know where mmdet explains this. I have checked the issues on mmdet's GitHub and searched the mmdet manual.

I can't find the link, but I remember that mAP = -1 means there is some problem with your dataset.


Thanks for your reply! I solved this problem by changing iou_threshold to None in coco.py. (Thank you again~)
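For reference, a rough sketch of how that setting can be passed through an mmdet 2.x style config, where the argument is called iou_thrs in CocoDataset.evaluate and None falls back to the default 0.50:0.95 sweep (this assumes stock mmdet 2.x behaviour; the modified coco.py in this repo may name it differently):

# Assumed mmdet 2.x behaviour: extra keys in the `evaluation` dict are forwarded
# to CocoDataset.evaluate, where iou_thrs=None means the standard IoU sweep.
evaluation = dict(
    interval=1,
    metric='bbox',
    proposal_nums=(100, 300, 1000),
    iou_thrs=None,  # None -> IoU thresholds from 0.50 to 0.95
)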