yaoyao-liu/meta-transfer-learning

Results on ResNet-12 backbone without pre-training.

LouieYang opened this issue · 2 comments

Hello, I find that pre-training in TADAM has a great influence on the final accuracy (about 4% on miniImageNet). Could you also share the performance of MTL with ResNet-12 without pre-training?

Many thanks.

Hi @LouieYang,

Thanks for your interest in our work.

Our MTL aims to reduce the number of learnable parameters by only updating the scaling and shifting (SS) weights on top of a pre-trained model. Without a pre-trained model, MTL would update the SS weights of a fixed, randomly initialized backbone. I think this setting is not reasonable for MTL, so we don't have these results.
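
To make the SS idea concrete, here is a minimal TensorFlow sketch. This is illustrative only, not the repository's actual code: the class name `SSConv2D`, the variable names, and the shapes are all assumptions. The frozen base kernel is scaled per output channel and the output is shifted, with only `scale` and `shift` trainable.

```python
import tensorflow as tf

class SSConv2D(tf.Module):
    """Conv layer whose frozen base weights are modulated by trainable
    scaling/shifting (SS) parameters. Illustrative sketch only."""

    def __init__(self, base_kernel, name=None):
        super().__init__(name=name)
        # Frozen base weights: loaded from a pre-trained model in the
        # normal MTL setting, or left at random init in the setting
        # asked about here.
        self.kernel = tf.Variable(base_kernel, trainable=False)
        out_ch = base_kernel.shape[-1]
        # Only these SS weights are meta-learned.
        self.scale = tf.Variable(tf.ones([1, 1, 1, out_ch]), trainable=True)
        self.shift = tf.Variable(tf.zeros([out_ch]), trainable=True)

    def __call__(self, x):
        # Per-output-channel scaling of the frozen kernel, plus a shift
        # added to the output (broadcast over batch and spatial dims).
        y = tf.nn.conv2d(x, self.kernel * self.scale, strides=1, padding="SAME")
        return y + self.shift

# Example: a frozen 3x3 conv with 4 output channels on an 84x84 RGB input.
base = tf.random.normal([3, 3, 3, 4])
layer = SSConv2D(base)
out = layer(tf.random.normal([2, 84, 84, 3]))   # shape (2, 84, 84, 4)
print(len(layer.trainable_variables))           # 2: only scale and shift
```

As the sketch shows, if `base_kernel` never sees pre-training, the SS weights can only rescale random features, which is why we consider that setting unreasonable for MTL.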

If you would like these results, it is easy to run this setting ('ResNet-12, w/o pre-training') with the TensorFlow implementation in this repo: simply start meta-training without loading a pre-trained model.
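
As a rough illustration of where that choice is made, here is a hedged sketch; `build_backbone`, the `pretrain_ckpt/` directory, and the `load_pretrained` flag are hypothetical stand-ins, not the repo's actual entry point or checkpoint layout.

```python
import tensorflow as tf

# Hypothetical minimal stand-in backbone; the repo's ResNet-12 is more involved.
def build_backbone():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 3, padding="same", input_shape=(84, 84, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])

model = build_backbone()
load_pretrained = False  # the 'ResNet-12 w/o pre-training' setting

if load_pretrained:
    # Normal MTL pipeline: restore backbone weights from the pre-training phase.
    ckpt = tf.train.Checkpoint(model=model)
    ckpt.restore(tf.train.latest_checkpoint("pretrain_ckpt/")).expect_partial()
# Otherwise the backbone keeps its random initialization, and meta-training
# proceeds to learn only the SS weights on top of these fixed weights.
```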

Best,
Yaoyao

This explanation clears up my confusion. Many thanks.