How to evaluate on the test datasets?
Closed this issue · 2 comments
YAOSL98 commented
How to evaluate on the test datasets?
DearCaat commented
For Camelyon-16 and TCGA-NSCLC, we used multi-fold cross-validation. Therefore, we didn't use the official Camelyon-16 test set to evaluate the models.
- Cross-validation code: cv-fold=3 for Camelyon-16, cv-fold=4 for TCGA-NSCLC. Complete Codes.
- If you want to evaluate on the test set yourself, you should train a model using only the training set and then evaluate it on the test set. This repo does not contain that code; you can use the model API.
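The split logic behind the cross-validation setup above can be sketched as follows. This is a minimal illustration, not this repo's actual code: the fold counts (3 for Camelyon-16, 4 for TCGA-NSCLC) come from the reply, but `kfold_indices` and the sample counts are placeholders you would replace with your own dataset lists.

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=0):
    """Shuffle sample indices and split them into n_folds (train, val) index pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    splits = []
    for k in range(n_folds):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        splits.append((train_idx, val_idx))
    return splits

# Placeholder sample counts; use the sizes of your own slide lists.
camelyon_splits = kfold_indices(n_samples=100, n_folds=3)   # cv-fold=3 for Camelyon-16
nsclc_splits = kfold_indices(n_samples=100, n_folds=4)      # cv-fold=4 for TCGA-NSCLC
```

To evaluate on an official test set instead, you would skip the fold loop entirely: train one model on the full training split and run inference once on the held-out test split.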
YAOSL98 commented
Thanks a lot :)