floodsung/LearningToCompare_FSL

Select the model based on testing accuracy?

PatrickZH opened this issue · 7 comments

Thank you for providing the code!
I have a concern about the model selection in your miniimagenet_train_few_shot.py.
Line 260: it seems the best model is selected as the one with the best testing accuracy (not validation accuracy)?

It's based on the meta-validation set: see 1, 2, 3

@ehsanmok Hi, however, in
https://github.com/floodsung/LearningToCompare_FSL/blob/master/miniimagenet/miniimagenet_train_few_shot.py#L15
you only use `task_generator_test`, not `task_generator`…

Right! Not good code. It's the third mistake, along with not calling `model.eval()` and using the same normalization for Omniglot and mini-ImageNet!
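For context on the `model.eval()` point: layers such as dropout and batch norm behave differently in training and evaluation mode, so skipping `model.eval()` makes test-time predictions stochastic. A minimal PyTorch sketch with a toy model (not the repo's network):

```python
import torch
import torch.nn as nn

# Toy model with dropout; the repo's networks also contain batch norm,
# which is likewise affected by train/eval mode.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.eval()  # disables dropout, so forward passes become deterministic
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# In eval mode the two outputs are identical; in train mode (model.train())
# dropout would randomly zero activations and they would generally differ.
```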

However, it's done correctly for one-shot here. Given the copy-paste style of the code, it was perhaps changed at training time, and the released code wasn't carefully checked!

@ehsanmok Hello! I think that although the code uses `task_generator_test` instead of `task_generator` in miniimagenet_train_few_shot.py, it doesn't influence the result of model training, because `metatest_folders` is only used for monitoring generalization performance; it doesn't participate in the training process itself.

I would like to ask: is there also a problem with the model selection on Omniglot? Should the model be chosen based on training accuracy instead?

When selecting a model, the test data is supposed to be unseen, so test accuracy cannot be used for model selection.
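To make the correct protocol concrete, here is a hedged sketch of the selection loop under discussion: checkpoint by meta-validation accuracy during training, and touch the meta-test split exactly once at the end. The `evaluate` function, the episode count, and the "model" placeholder are all hypothetical stand-ins, not the repo's code:

```python
import random

random.seed(0)

def evaluate(model, split):
    # Stand-in for a real meta-evaluation loop over episodes sampled
    # from the given split; returns a fake accuracy in [0, 1].
    return random.random()

best_val_acc = 0.0
best_model = None
for episode in range(5):
    model = episode  # placeholder for the trained weights at this episode
    val_acc = evaluate(model, "meta-val")  # validation, NOT meta-test
    if val_acc > best_val_acc:
        best_val_acc = val_acc
        best_model = model  # this is where a checkpoint would be saved

# Meta-test is evaluated only once, on the validation-selected model.
test_acc = evaluate(best_model, "meta-test")
```

The key design point is that `best_model` never sees meta-test accuracy; replacing `"meta-val"` with `"meta-test"` in the selection loop is exactly the leak this issue describes.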