Question about model selection.
BurningFr opened this issue · 2 comments
BurningFr commented
Hi,
I notice that in train.py the model is selected by the best test Dice value on the "unseen" test domain, while most DG methods select the best model on a source-domain validation set.
However, I can't find a train/val split in the training data. Do you think it is all right to use the best test Dice value?
liuquande commented
Hi,
Since different datasets can present large distribution shifts, a model selected on a validation set drawn from the training domains often does not perform well on the testing domain.
We therefore select the model directly on the testing domain, to observe the model's performance on that particular testing distribution.
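For reference, both selection protocols reduce to picking the checkpoint with the highest Dice on some evaluation set; the only difference is which set is used. A minimal sketch (the function name and the example scores are hypothetical, not taken from train.py):

```python
# Hypothetical sketch of checkpoint selection by best Dice score.
# In the standard DG protocol, dice_by_epoch would be computed on a
# held-out validation split of the SOURCE domains; in this repo it is
# computed on the unseen TEST domain instead.
def select_best_checkpoint(dice_by_epoch):
    """dice_by_epoch: dict mapping epoch -> mean Dice on the chosen eval set."""
    best_epoch = max(dice_by_epoch, key=dice_by_epoch.get)
    return best_epoch, dice_by_epoch[best_epoch]

# Example with made-up scores:
scores = {10: 0.81, 20: 0.85, 30: 0.84}
best_epoch, best_dice = select_best_checkpoint(scores)
```

With these made-up scores, epoch 20 would be selected.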
Thanks,
Quande.