Mean IoU values not matching
Closed this issue · 8 comments
When I run the evaluation code, I get a mean IoU of 47%, as opposed to the reported 68.8%. This was with the pretrained model trained on 1/8 of the training data. Is there something I have done wrong, or are there changes that need to be made to the code before running the evaluation? The dataset was VOC2012.
Hi,
Could you share which version of PyTorch you are using? (We tested on PyTorch 0.3/0.4.) What about the other pretrained models? Do they also produce low results?
Thank you for the reply. My PyTorch version is 0.4.1.
With the advFull weights the IoU is around 51%, and with the 0.5-training-data weights the displayed IoU is 97%, which definitely has to be an error. Could you please check whether you get different results when running the evaluation? If so, please update the pretrained weights.
Hi,
On my side all the numbers are normal. Please consider using an earlier PyTorch version for now. I probably won't have time to address the version-compatibility issue before December.
I have downgraded PyTorch to version 0.3.1, but the results are still similar. The output I get is:
0 processd
100 processd
200 processd
300 processd
400 processd
500 processd
600 processd
700 processd
800 processd
900 processd
1000 processd
1100 processd
1200 processd
1300 processd
1400 processd
class 0 background IU 0.97
class 1 aeroplane IU 0.01
meanIOU: 0.49068035245871083
It only shows scores for two classes. Is that how it is supposed to be? If not, can you send me the code you are using by mail?
It is supposed to show all classes. Please double-check your data and make sure the images are loaded correctly. You can also check the visualized results.
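For example, a quick way to verify that the labels load as single-channel class maps (a minimal sketch; the file path is a placeholder):

```python
# Sanity check for ground-truth labels (sketch; the path is a placeholder).
# A 21-class VOC label should load as a 2-D array whose values are the
# class indices 0-20, plus 255 for the ignored/void region.
import numpy as np
from PIL import Image

label = np.array(Image.open('/path/to/gt/2007_000033.png'))
print(label.shape, np.unique(label))
# A 3-D (H, W, 3) result means the palette PNG was decoded as RGB,
# which would break the IoU computation.
```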
This is the code I'm using, with no further modifications.
Can you tell me the Python version, and the versions of all the other required packages you are using, so that I can replicate your setup? The visualized results look fine to me. I suspect the problem is in the get_iou function; my understanding of what it should compute is sketched below.
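For reference, this is the usual confusion-matrix computation (a minimal sketch I wrote for checking, not the repo's actual code; per_class_iou is my own name):

```python
import numpy as np

NUM_CLASSES = 21
IGNORE_LABEL = 255

def per_class_iou(preds, gts, num_classes=NUM_CLASSES):
    """IoU(c) = TP / (TP + FP + FN), accumulated over the whole val set."""
    hist = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        mask = gt != IGNORE_LABEL  # drop void pixels
        # Row = ground-truth class, column = predicted class.
        hist += np.bincount(
            num_classes * gt[mask].astype(int) + pred[mask].astype(int),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    # intersection / (gt area + pred area - intersection); NaN for
    # classes that never appear in either predictions or ground truth.
    return np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))

# Mean IoU over the classes that actually appear:
# np.nanmean(per_class_iou(all_preds, all_gts))
```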
Passing the augmented segmentation class files (SegmentationClassAug) as the ground truth solved the problem. I don't see any mention of this in the paper. Can you explain why those are used as the ground truth instead of the SegmentationClass files provided in the dataset? Passing the latter as the ground truth did not work.
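In case it helps anyone else: the two label sets are not interchangeable. A quick comparison (a sketch; the paths and the SegmentationClassAug directory name reflect my setup) shows the difference:

```python
# Compare the original VOC labels with the augmented ones for the same
# image (sketch; paths are placeholders for wherever the data lives).
import numpy as np
from PIL import Image

voc = np.array(Image.open('VOC2012/SegmentationClass/2007_000033.png'))
aug = np.array(Image.open('VOC2012/SegmentationClassAug/2007_000033.png'))
print(np.unique(voc))  # the original labels include 255 boundary pixels
print(np.unique(aug))  # the SBD-style labels typically omit them
```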
Hi,
Sorry for the confusion. We do mention in the paper that we use SBD as the training data for PASCAL VOC, as this is the common setting for recent semantic segmentation methods.