How do you update the base_learner's parameters?
xugy16 opened this issue · 3 comments
Thank you for the code.
I have a question about the base_learner update.
- The base-learner is fast-updated for 100 steps.
- Then we return qry_logits and calculate the cross-entropy loss on the query set.
- Finally, self.optimizer is used to update the parameters.
But what gradient is stored in the base_learner? You use the fast model to calculate the query loss.
Thanks for your interest in our work.
The fast model can be regarded as a function of the base learner. Thus, we can calculate the derivative of the query loss with respect to the base learner's parameters. We update the base learner using Eq. 5 in our paper.
If you have any further questions, please feel free to contact me.
Best,
Yaoyao
I really appreciate the response.
So you are using first-order MAML to update the classifier head (the base learner)?
Yes. We use the first-order approximation of MAML to update the FC classifier.
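For anyone else reading this thread, here is a minimal sketch of that first-order update on a toy linear model in NumPy. It illustrates the general FOMAML scheme, not the repository's actual code; all names (`fomaml_step`, `inner_steps`, the learning rates) are hypothetical.

```python
# Hypothetical sketch of a first-order MAML (FOMAML) outer step on a
# toy linear regression task. Not the repository's implementation.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, x, y):
    """Mean squared error of the linear model y_hat = x @ w, with its gradient."""
    err = x @ w - y
    loss = float(np.mean(err ** 2))
    grad = 2.0 * x.T @ err / len(y)
    return loss, grad

def fomaml_step(w_base, x_spt, y_spt, x_qry, y_qry,
                inner_steps=100, inner_lr=0.01, outer_lr=0.1):
    # Inner loop: fast weights start from the base learner and are
    # adapted on the support set (this is the "fast model").
    w_fast = w_base.copy()
    for _ in range(inner_steps):
        _, g = loss_and_grad(w_fast, x_spt, y_spt)
        w_fast -= inner_lr * g
    # First-order approximation: the query-set gradient is evaluated at
    # the fast weights but applied directly to the base weights; the
    # second-order terms from differentiating through the inner loop
    # are dropped.
    qry_loss, g_qry = loss_and_grad(w_fast, x_qry, y_qry)
    w_base = w_base - outer_lr * g_qry
    return w_base, qry_loss

# Toy task: recover w_true = [2, -1].
w_true = np.array([2.0, -1.0])
x_spt = rng.normal(size=(20, 2)); y_spt = x_spt @ w_true
x_qry = rng.normal(size=(20, 2)); y_qry = x_qry @ w_true

w = np.zeros(2)
losses = []
for _ in range(5):
    w, ql = fomaml_step(w, x_spt, y_spt, x_qry, y_qry)
    losses.append(ql)
```

The key point is that `g_qry` is computed with the fast model's weights but used to update the base learner's weights, which is exactly the first-order shortcut.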
If you have any further questions, please do not hesitate to contact me.