katerakelly/pytorch-maml

About model parameter updates in the OmniglotNet class

Opened this issue · 1 comment

Thanks for your nice implementation of MAML. However, I think using state_dict() and load_state_dict() might be much faster than manually modifying the weights (omniglot_net.py, line 43). Could I first deepcopy the net's parameters (state_dict()), update the fast weights with an ordinary optimizer, and then load the original parameters back in order to update the meta-learner? Thanks.
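Roughly what I have in mind (a toy sketch with a tiny placeholder net instead of OmniglotNet, so the names here are not from this repo):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for OmniglotNet, just to make the sketch self-contained.
net = nn.Linear(4, 2)
loss_fn = F.cross_entropy
x_train, y_train = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_val, y_val = torch.randn(8, 4), torch.randint(0, 2, (8,))

theta = copy.deepcopy(net.state_dict())      # save the original (meta) weights

inner_opt = torch.optim.SGD(net.parameters(), lr=0.01)
for _ in range(5):                           # inner-loop adaptation
    inner_opt.zero_grad()
    loss_fn(net(x_train), y_train).backward()
    inner_opt.step()                         # net now holds the "fast" weights

meta_loss = loss_fn(net(x_val), y_val)       # query loss under the fast weights
meta_loss.backward()
net.load_state_dict(theta)                   # restore the original weights
```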

I also wanted to do that, but the gradient cannot backpropagate and the parameters do not update. Is the key point that the gradients have to be shared between the two models?
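As far as I can tell, the gradient cannot flow because optimizer.step() and load_state_dict() both modify the parameters in-place, outside the autograd graph, so the query loss has no differentiable path back through the inner updates to the original weights (the second-order MAML term is lost). The pattern that does keep the graph alive computes the fast weights with torch.autograd.grad(..., create_graph=True) and feeds them through a functional forward pass, which I believe is why the forward in omniglot_net.py accepts a weights argument. A toy sketch with a placeholder linear net:

```python
from collections import OrderedDict
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Linear(4, 2)                        # tiny stand-in for the real net
loss_fn = F.cross_entropy
x_train, y_train = torch.randn(8, 4), torch.randint(0, 2, (8,))
x_val, y_val = torch.randn(8, 4), torch.randint(0, 2, (8,))

# One differentiable inner step: the SGD update is written as graph ops,
# so the fast weights stay connected to net.parameters().
params = OrderedDict(net.named_parameters())
loss = loss_fn(net(x_train), y_train)
grads = torch.autograd.grad(loss, params.values(), create_graph=True)
fast = OrderedDict(
    (name, p - 0.01 * g) for (name, p), g in zip(params.items(), grads)
)

# Functional forward pass that uses the fast weights explicitly instead of
# loading them into the module.
meta_loss = loss_fn(F.linear(x_val, fast['weight'], fast['bias']), y_val)
meta_loss.backward()                         # gradients reach net.parameters()
print(net.weight.grad is not None)           # True: the meta update can run
```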