DearCaat/MHIM-MIL

How to evaluate on the testing datasets?

Closed this issue · 2 comments

How to evaluate on the testing datasets?

For both Camelyon-16 and TCGA-NSCLC, we used multi-fold cross-validation. Therefore, we didn't use the official Camelyon-16 test set to evaluate the models.

  • Cross-validation code: cv-fold=3 for Camelyon-16, cv-fold=4 for TCGA-NSCLC. See the complete code.
  • If you want to evaluate on the test set yourself, you should train a model only on the train set and then evaluate it on the test set. This repo does not contain that code, but you can use the model API.
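The multi-fold protocol above can be sketched as follows. This is a minimal, hypothetical illustration, not code from the repo: `kfold_splits` is an assumed helper, and the train/evaluate calls in the loop are placeholders for the repo's actual model API.

```python
import random

def kfold_splits(ids, k, seed=2021):
    """Shuffle slide IDs and yield (train_ids, val_ids) for each fold."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::k] for i in range(k)]  # k roughly equal partitions
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

# cv-fold=3 for Camelyon-16 (use k=4 for TCGA-NSCLC)
slides = [f"slide_{i:03d}" for i in range(12)]  # dummy slide IDs
for fold, (train_ids, val_ids) in enumerate(kfold_splits(slides, k=3)):
    # Placeholders for the repo's actual training/evaluation entry points:
    # model = train(train_ids); metrics = evaluate(model, val_ids)
    print(f"fold {fold}: {len(train_ids)} train / {len(val_ids)} val")
```

To evaluate on the official Camelyon-16 test set instead, you would skip the fold loop, train once on the full training set, and run the evaluation step on the held-out test slides.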

Thanks a lot :)