mboudiaf/RePRI-for-Few-Shot-Segmentation

mIoU on validation sets during training, i.e., 61 class mIoU

zhiheLu opened this issue · 2 comments

Hi, thanks for your great work. I guess the performance of the pre-trained model is vital for the downstream few-shot task. Could you provide the mIoU on the validation sets during training? There should be four values, one for each of the four splits of each dataset.

Hey,

Thanks for your interest. Here are the validation curves:

Pascal-5i folds (i.e., on the 15+1 training classes), trained for 100 epochs with a PSPNet-ResNet50 and the same hyperparameters as described in the paper:
[plot: validation mIoU curves for the four Pascal-5i folds]

Coco-20i folds (i.e., on the 60+1 training classes):
[plot: validation mIoU curves for the four Coco-20i folds]
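For readers unfamiliar with the benchmarks, the "15+1" and "60+1" counts come from how Pascal-5i and Coco-20i split their classes into four folds: one fold's classes are held out as novel, the rest (plus background) are used for training. A minimal sketch of the contiguous-block split (the function name and the exact split convention are illustrative assumptions, not the repo's API; some Coco-20i protocols interleave classes instead):

```python
# Hypothetical sketch of contiguous fold splits as used by Pascal-5i-style
# benchmarks. Not the repo's actual code; the split convention may differ.
def fold_classes(n_classes, fold, n_folds=4):
    """Return (novel, base) class ids for a given fold.

    Novel classes of fold i are the contiguous block of n_classes // n_folds
    ids starting at i * (n_classes // n_folds); all remaining ids are base
    (training) classes. Background is handled separately (+1 in the counts).
    """
    per_fold = n_classes // n_folds
    novel = list(range(fold * per_fold, (fold + 1) * per_fold))
    base = [c for c in range(n_classes) if c not in novel]
    return novel, base

# Pascal-5i: 20 classes -> 5 novel + 15 base per fold (15 + background = 16).
novel, base = fold_classes(20, fold=1)
print(novel)       # [5, 6, 7, 8, 9]
print(len(base))   # 15

# Coco-20i: 80 classes -> 20 novel + 60 base per fold (60 + background = 61).
novel, base = fold_classes(80, fold=0)
print(len(base))   # 60
```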

Interestingly, the few-shot results seem to be inversely correlated with these validation results (for instance, split 1 has the worst validation mIoU, but the best mIoU on few-shot tasks for both Coco and Pascal). Intuitively, that is understandable: high validation performance can mean the model has specialized to the 16 (or 61) classes seen during training (in a sense, overfitted to those classes), while the few-shot tasks are sampled from the unseen classes.

PS: By setting the option episodic_val to False in the .yaml files, validation will be performed on the 16 (or 61) classes used for training, and you should then be able to reproduce those curves.
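Concretely, the change described above would look something like this in one of the training config files (the file path and surrounding keys are illustrative; only the episodic_val key comes from this thread):

```yaml
# e.g. a config .yaml for Pascal-5i training (path is illustrative)
episodic_val: False  # validate on the 16 (or 61) base classes instead of sampled episodes
```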

Malik

Thanks for your reply. It seems that avoiding overfitting to the base classes is vital for generalizing to few-shot tasks.