Can you please explain what you are trying to do with this attenuation before the inner loop?
AyanKumarBhunia opened this issue · 2 comments
```python
# Attenuate the initialization for L2F
if self.args.attenuate:
    task_embeddings = self.get_task_embeddings(frames, task_id, names_weights_copy)
    names_weights_copy = self.attenuate_init(task_embeddings=task_embeddings,
                                             names_weights_copy=names_weights_copy)
```
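For context, here is a hedged, self-contained sketch of what an attenuation step of this kind does: a small generator maps a task embedding to one factor gamma in (0, 1) per layer, and the meta-learned initialization is scaled layer-wise before the inner loop. The function names mirror the snippet above, but the shapes, the mean-pooled task embedding, and the gamma generator (`W_g`, `b_g`) are illustrative assumptions, not the repo's exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def get_task_embedding(support_grads):
    # In L2F the task embedding is built from gradients of the initialization
    # on the support set; here we simply mean-pool each layer's gradient.
    return np.array([g.mean() for g in support_grads])

def attenuate_init(task_embedding, init_weights, W_g, b_g):
    # A small generator maps the task embedding to one attenuation factor
    # gamma in (0, 1) per layer; the initialization is scaled layer-wise.
    gammas = sigmoid(W_g @ task_embedding + b_g)  # shape: (num_layers,)
    attenuated = [g * w for g, w in zip(gammas, init_weights)]
    return attenuated, gammas

# Toy setup: two "layers" of meta-learned initialization.
rng = np.random.default_rng(0)
init_weights = [rng.standard_normal((3, 3)), rng.standard_normal((3,))]
support_grads = [rng.standard_normal(w.shape) for w in init_weights]

W_g = rng.standard_normal((2, 2))  # hypothetical gamma-generator parameters
b_g = np.zeros(2)

emb = get_task_embedding(support_grads)
attenuated, gammas = attenuate_init(emb, init_weights, W_g, b_g)
```

The intuition is that layers whose prior knowledge conflicts with the current task get a gamma near 0 (selectively "forgotten"), while compatible layers keep a gamma near 1; with attenuation disabled, the inner loop starts from the unscaled initialization, i.e. plain MAML.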
Could you please point to the paper that inspired this attenuation?
Can you please comment on how much performance would drop if we did not use it and just followed off-the-shelf MAML?
I read about this attenuation technique in 'Learning to Forget for Meta-Learning', CVPR 2020 (you are probably all from the same research group).
Thanks!
Hi,
Thanks for the interest in our paper.
Yes, as you have said, 'Learning to Forget for Meta-Learning' came from our lab, and I'm the first author.
You can find the motivation and methodology details of attenuation in the paper you mentioned. If you have further questions about it, you can e-mail me directly (dsybaik@snu.ac.kr) or open an issue on its GitHub page:
https://github.com/baiksung/L2F
The results reported in the paper are produced with the original MAML methodology. We have included the L2F (attenuation) implementation as well, just for reference.
Hope this answers your questions.