liuquande/SAML

Question about model selection.

BurningFr opened this issue · 2 comments

Hi,
I noticed that in train.py the model is selected by the best test Dice value on the "unseen" test domain, whereas most DG methods select the best model on a source-domain validation set.
However, I can't find a train/val split in the training data. Do you think it is all right to use the best test Dice value? For context, what I would expect is something like the source-domain validation selection sketched below.
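A rough sketch of the conventional selection scheme (not code from this repo; `evaluate_dice` and the loaders are placeholders): hold out a validation split from the source domains and pick the checkpoint with the best mean validation Dice, never touching the unseen test domain.

```python
import numpy as np

def select_checkpoint(checkpoints, source_val_loaders, evaluate_dice):
    """Pick the checkpoint with the best mean Dice over held-out source-domain splits.

    checkpoints: list of model states saved during training.
    source_val_loaders: one validation loader per source domain.
    evaluate_dice(state, loader) -> mean Dice of that checkpoint on that loader (placeholder).
    """
    best_state, best_score = None, -np.inf
    for state in checkpoints:
        # Average Dice across the held-out splits of all source domains;
        # the unseen test domain is never used for selection.
        score = np.mean([evaluate_dice(state, loader) for loader in source_val_loaders])
        if score > best_score:
            best_state, best_score = state, score
    return best_state, best_score
```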

Hi,

Since different datasets can present large distribution shifts, the model selected on a validation set from the training domains often does not perform well on the testing domain.
We therefore select the model directly on the testing domain, to observe the general model performance on that particular testing distribution.

Thanks,
Quande.

Do you think domain generalization should allow selecting the model on the "unseen" test domain? I don't think so!

Also, in Issue #2 you mention that the datasets were selected based on the intra-domain experiments. Could you release the case numbers you dropped for each domain?