Train / Val / Test split
Closed this issue · 2 comments
mboudiaf commented
Hi,
Thank you for the great work. I'm new to few-shot segmentation, and I was just trying to get my head around how the data split is made. From the code, I seem to understand that the validation and test sets are the same, i.e., the best model during training is picked based on its performance on the test set (with the novel classes). Am I missing something here?
Thanks in advance,
Malik
tianzhuotao commented
@mboudiaf Yes, your understanding is correct! There is no separate test set in the standard few-shot segmentation evaluation; this is also the reason why cross-validation is performed on the validation set, with different splits of the classes into base and novel.
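To make the cross-validation concrete, here is a minimal sketch of the fold-based class split commonly used for benchmarks like PASCAL-5i. The 20-class / 4-fold layout and the function name are assumptions for illustration, not taken from this repository's code:

```python
# Sketch of fold-based cross-validation splits (assumed layout: 20 classes
# divided into 4 folds of 5). For a given fold, its 5 classes are held out
# as novel classes for evaluation; the remaining 15 are base classes used
# for training. Averaging results over the 4 folds gives the reported score.

NUM_CLASSES = 20
NUM_FOLDS = 4

def split_classes(fold):
    """Return (base_classes, novel_classes) for the given fold index."""
    fold_size = NUM_CLASSES // NUM_FOLDS  # 5 classes per fold
    novel = list(range(fold * fold_size, (fold + 1) * fold_size))
    base = [c for c in range(NUM_CLASSES) if c not in novel]
    return base, novel

if __name__ == "__main__":
    for fold in range(NUM_FOLDS):
        base, novel = split_classes(fold)
        print(f"fold {fold}: novel classes = {novel}")
```

Since the novel classes rotate across folds, every class is evaluated as "unseen" exactly once, which compensates for the lack of a held-out test set.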
mboudiaf commented
@tianzhuotao Thank you very much for your answer, and congrats again on this work :)