fcdl94/WILSON

The reproduced results on COCO-to-VOC

Richardych opened this issue · 2 comments

Hi Fabio,

I would like to ask some questions regarding the COCO-to-VOC results.
I performed the dataset preparation step according to the README.
Then I ran the provided coco.sh to complete the step-0 and step-1 training.

However, the evaluation on COCO only achieves 28.59 mIoU. I don't understand why the result is so poor compared to the result in your paper (COCO-All: 40.6). The detailed per-class IoUs are reported below.

BTW, since step 1 is trained on VOC, how do I get the results on COCO (61-80)?
Thanks!

Total samples: 5000.000000
Overall Acc: 0.736741
Mean Acc: 0.355343
Mean Prec: 0.416170
Mean IoU: 0.285934
Class IoU:
class 0: 0.7803624638985466
class 1: 0.0
class 2: 0.8443482556844486
class 3: X
class 4: 0.640725418774259
class 5: 0.3691700312387975
class 6: 0.0
class 7: 0.5545383087403365
class 8: 0.891710913127037
class 9: 0.7955269627722906
class 10: X
class 11: 0.4523464669711819
class 12: X
class 13: 0.05027925978293037
class 14: 0.5580934766980988
class 15: 0.07868707454509988
class 16: 0.022431797846528094
class 17: 0.013974576139435054
class 18: 0.18439521508467552
class 19: 0.321027624490874
class 20: 0.0
class 21: 0.21032159246325077
class 22: 0.10696259954614531
class 23: 0.2789628011267344
class 24: 0.2723383824462526
class 25: 0.0
class 26: 0.3858164062446917
class 27: 0.29245244264889564
class 28: 0.15235938683646286
class 29: 0.1923896367580217
class 30: 0.333419450500088
class 31: 0.5790551316016739
class 32: 0.4472113690318672
class 33: 0.4015514901863279
class 34: 0.6025023241144885
class 35: 0.5289954625680039
class 36: 0.48586341141865
class 37: 0.4239569210117141
class 38: 0.7029962077919376
class 39: 0.5575726970141223
class 40: 0.426860774277874
class 41: 0.0
class 42: X
class 43: X
class 44: 0.4088950087365174
class 45: 0.34208602247562386
class 46: 0.4019400003314785
class 47: 0.4142366906996078
class 48: 0.48391029888071846
class 49: 0.4874306930019723
class 50: 0.07728368419808865
class 51: 0.4535597597215896
class 52: 0.6458672529570286
class 53: X
class 54: 0.6269001245980994
class 55: 0.4486133177752055
class 56: 0.5265071915564131
class 57: 0.6922550138506051
class 58: 0.0
class 59: 0.3159855859270987
class 60: X
class 61: 0.001524282809864177
class 62: 0.0005324307628895296
class 63: 0.004115715609509475
class 64: 2.057941558961252e-05
class 65: 7.361325183386447e-05
class 66: 0.014190666487536205
class 67: 0.011869154551484235
class 68: 0.0
class 69: 0.00020963490414156724
class 70: 0.013198314935404704
class 71: 0.0004973000746272396
class 72: 6.932937232995318e-05
class 73: 0.004596710617840505
class 74: 0.07932975085118488
class 75: X
class 76: 0.022219097505168584
class 77: 0.0005423444474400956
class 78: 0.00019570246468256935
class 79: X
class 80: 0.16938774078738097
Class IoU:' : 0.2859340187075097
Class Acc:' : 0.35534273802912
Class Prec:' : 0.41616983578468136
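
For reference, here is a minimal sketch of how a mean IoU like the one above is usually computed from a confusion matrix. I am assuming (not certain) that this matches the repo's metric code, and that the classes reported as X are those with an undefined IoU, which are excluded from the mean.

```python
import numpy as np

def per_class_iou(conf_matrix: np.ndarray) -> np.ndarray:
    """Per-class IoU from a (num_classes x num_classes) confusion matrix
    (rows = ground truth, columns = prediction).

    IoU_c = TP_c / (TP_c + FP_c + FN_c). A class with no ground-truth and no
    predicted pixels gives 0/0 and comes out as NaN (shown as 'X' above).
    """
    tp = np.diag(conf_matrix)
    fp = conf_matrix.sum(axis=0) - tp
    fn = conf_matrix.sum(axis=1) - tp
    with np.errstate(divide="ignore", invalid="ignore"):
        return tp / (tp + fp + fn)

def mean_iou(conf_matrix: np.ndarray) -> float:
    # NaN ('X') classes are skipped by nanmean, so they do not drag the mean down.
    return float(np.nanmean(per_class_iou(conf_matrix)))
```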

Hey @Richardych !

From your output, it seems that the new classes are not being learned properly (their performance is close to 0).
Did you convert the VOC labels to the COCO class indexing? You can easily do that with the script in data/makecocovoc.py.
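
Conceptually, that conversion is just a per-pixel remapping of class ids in the VOC segmentation masks. Here is a minimal sketch with a placeholder mapping (VOC ids 1-20 mapped onto 61-80, consistent with the "COCO (61-80)" range mentioned above, but the authoritative mapping is whatever data/makecocovoc.py produces):

```python
import numpy as np
from PIL import Image

# Placeholder mapping: background (0) and ignore (255) stay put, VOC class c
# goes to index 60 + c in the COCO-based indexing. Assumption for illustration
# only -- use the mapping generated by data/makecocovoc.py in practice.
VOC_TO_COCO = {0: 0, 255: 255}
VOC_TO_COCO.update({c: 60 + c for c in range(1, 21)})

def convert_mask(voc_mask_path: str, out_path: str) -> None:
    """Remap a VOC segmentation mask to COCO-style class indices, pixel-wise."""
    mask = np.array(Image.open(voc_mask_path))      # palette indices, uint8
    lut = np.arange(256, dtype=np.uint8)            # identity lookup table
    for voc_id, coco_id in VOC_TO_COCO.items():
        lut[voc_id] = coco_id
    Image.fromarray(lut[mask]).save(out_path)
```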

Regarding the results on COCO: the VOC classes are a subset of the COCO ones. In step 0 we learn, on the COCO dataset, all the classes that are not in VOC, and in step 1 we learn the VOC ones.
By default, the code will test on both datasets.
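
As a rough illustration of that split (the ranges below follow the indexing in the log above; the exact class composition is defined by the repo's task files, so treat this as an assumption):

```python
import numpy as np

OLD = list(range(1, 61))    # 60 COCO classes not in VOC, learned in step 0
NEW = list(range(61, 81))   # 20 VOC classes, learned in step 1

def split_miou(class_iou: np.ndarray) -> dict:
    """Old/new/all mIoU from an array of 81 per-class IoUs (NaN where 'X')."""
    return {
        "COCO 1-60 (old)": float(np.nanmean(class_iou[OLD])),
        "VOC 61-80 (new)": float(np.nanmean(class_iou[NEW])),
        # Whether the paper's COCO-All average includes the background class
        # is an assumption here; it is included below.
        "COCO-All": float(np.nanmean(class_iou[[0] + OLD + NEW])),
    }
```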

Does it help?

@fcdl94

Thanks for the quick reply.
I have solved the problem: I forgot to replace annotations/ with the generated annotations_my/.
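
In case it helps others, a sketch of that directory swap (the dataset root below is a placeholder; adjust it to your setup):

```python
from pathlib import Path

root = Path("data/voc")                                     # placeholder dataset root
(root / "annotations").rename(root / "annotations_orig")    # keep the originals
(root / "annotations_my").rename(root / "annotations")      # use the converted labels
```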