lood339/SCCvSD

Wrong IOU calculation

jiangwei221 opened this issue · 6 comments

Hi,
I just tested it with all the test images of the UOT dataset, but found the mean IOU_part is much worse than the number in the paper.
The mean IOU_part is 94.5 in the paper, while my test result is 69.8; many images got 0.0 IOU, which lowered the mean value.
I think this is because the released version contains a smaller database.
If possible, would you mind releasing the full dataset (maybe on Google Drive)?

Best


Tested output

In [9]: (iou_optim_list<0.01).sum()                                                                              
Out[9]: 39

In [10]: iou_optim_list.mean()                                                                                   
Out[10]: 0.6979822217526244

In [11]: np.median(iou_optim_list)                                                                               
Out[11]: 0.9302689543887088
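The gap between the mean and median above is exactly what a cluster of 0.0-IoU failures produces. A minimal illustration with dummy data (the real `iou_optim_list` is the array of per-image IoU values from the test script):

```python
import numpy as np

# Dummy per-image IoU values standing in for iou_optim_list:
# 39 hard failures at 0.0 plus otherwise good results near 0.93.
iou_optim_list = np.array([0.0] * 39 + [0.93] * 147)

num_failures = (iou_optim_list < 0.01).sum()   # count of near-zero IoU images
mean_iou = iou_optim_list.mean()               # dragged down by the failures
median_iou = np.median(iou_optim_list)         # barely affected by them

print(num_failures, mean_iou, median_iou)
```

The median stays at the typical per-image IoU while the mean drops sharply, which is why a handful of buggy 0.0 cases can sink the reported mean.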

Do you use the deep feature or the HoG feature? If you use the deep feature, the mean IOU_part should not be so low. I have not run the new script on the whole dataset, so if possible, please share your test script so that I can have a look at what happens. Thank you.
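For context, both features are used the same way at test time: the query's feature is matched against the database by nearest-neighbour search. A minimal sketch with random vectors (illustrative only, not the repo's retrieval code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database of N feature vectors (deep or HoG) and one query
# that is a slightly perturbed copy of database entry 7.
database_features = rng.standard_normal((1000, 16))
query_feature = database_features[7] + 0.01 * rng.standard_normal(16)

# Nearest-neighbour retrieval by L2 distance.
distances = np.linalg.norm(database_features - query_feature, axis=1)
retrieved_index = int(np.argmin(distances))

print(retrieved_index)  # -> 7
```

The retrieved index then selects a camera pose / homography from the database, which is what the IoU is computed against.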

I used the "deep" feature. Btw, it seems there is no HoG feature file in the repo? I cannot find it under https://github.com/lood339/SCCvSD/tree/master/data/features
Just created PR #3.
Thanks!

Thank you for the PR. The problem is confirmed.
For example, the program gives 0 IoU when query_index = 7. I visualized the query result, which is close to the ground truth, but the program outputs 0 IoU. Working on that ...
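A zero IoU for a visually close result usually points at the IoU computation itself rather than the retrieval. As an illustration (not the repo's actual code), IoU between two projected field regions can be computed on binary masks:

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU of two boolean masks; returns 0.0 when the union is empty."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union > 0 else 0.0

# Two overlapping rectangular "field" regions on a small grid.
a = np.zeros((100, 100), dtype=bool)
b = np.zeros((100, 100), dtype=bool)
a[10:60, 10:60] = True   # 50x50 region
b[20:70, 20:70] = True   # same size, shifted by 10 pixels

# Overlap is 40x40 = 1600 pixels; union is 2*2500 - 1600 = 3400.
print(mask_iou(a, b))
```

With this formulation a near-correct homography yields a high IoU, so a hard 0.0 for a close match suggests an empty intersection bug (e.g. masks rendered in different coordinate frames) rather than a genuinely bad estimate.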

The HoG feature is not pre-computed and saved, as it is very large. The code is at ./python/hog. The testing process is similar to that for the deep feature.
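As a rough sketch of what computing HoG features on the fly involves (plain NumPy, with illustrative cell/bin parameters; the repo's actual implementation lives in ./python/hog):

```python
import numpy as np

def hog_descriptor(image: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Gradient-orientation histograms over non-overlapping cells
    (simplified: no block normalization)."""
    gy, gx = np.gradient(image.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = image.shape
    features = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            mag = magnitude[y:y + cell, x:x + cell].ravel()
            ori = orientation[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(ori, bins=bins, range=(0.0, 180.0), weights=mag)
            features.append(hist)
    return np.concatenate(features)

edge_image = np.zeros((32, 32))
edge_image[:, 16:] = 1.0              # a single vertical edge
descriptor = hog_descriptor(edge_image)
print(descriptor.shape)               # (32/8)**2 cells * 9 bins = (144,)
```

Because the descriptor grows with image size and database size, storing it for every database entry is expensive, which is why it is recomputed at test time.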

Hi, the bug is fixed.
The result is:
---------- Direct retrieval performance ----------
mean IoU for retrieved homography 0.9102323238319108
median IoU for retrieved homography 0.9211387422021994
number of failed cases for retrieved homography 0

---------- Refined retrieval performance ----------
mean IoU for refined homography 0.9479986392214041
median IoU for refined homography 0.9642280376096697
number of failed cases for refined homography 0

Slightly better than the one in the paper.

Close the issue as the bug is fixed.