hshustc/CVPR19_Incremental_Learning

Adaptive weight of the LC loss

wonda opened this issue · 4 comments

wonda commented

Hello,
I found the calculation of the adaptive weight of less-forget constraint different from the description in the paper. Did I misunderstand this part or miss some details?

lamda_mult = (out_features1+out_features2)*1.0 / (args.nb_cl)

Hello, I think there is no problem with the adaptive weight: out_features1 + out_features2 is the number of old classes in the current session, and args.nb_cl is the number of novel classes in the current session, the same as described in the paper.
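For reference, here is a minimal sketch of how that adaptive weight ends up being computed (the square-root form and the `lamda_base` default are assumptions based on the surrounding repository code, not shown in the snippet above):

```python
import math

def adaptive_lamda(num_old_classes, num_new_classes, lamda_base=5.0):
    """Sketch of the adaptive weight for the less-forget constraint.

    num_old_classes:  out_features1 + out_features2 in the snippet above,
                      i.e. the number of old classes at the current session.
    num_new_classes:  args.nb_cl, the number of novel classes per session.
    lamda_base:       base coefficient (hypothetical default).
    """
    # This matches the line under discussion: old classes over novel classes.
    lamda_mult = (num_old_classes) * 1.0 / num_new_classes
    # Assumed: the base weight is scaled by the square root of that ratio.
    return lamda_base * math.sqrt(lamda_mult)

# e.g. 50 accumulated old classes, 10 novel classes in this session:
print(adaptive_lamda(50, 10))  # lamda_base * sqrt(5)
```

So the weight grows as old classes accumulate, which is why the ratio direction matters in the discussion below.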

But I can't understand the use of class_mean in #5. Do you think there is a problem there?

wonda commented

@cxy1996 Thanks for your reply. In eq. (7) of the paper, the number of novel classes is in the numerator, but in the code it is in the denominator. Maybe there is a mistake in that equation.
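To make the discrepancy explicit (writing |C_o| for the number of old classes and |C_n| for the number of novel classes, following the discussion; the square-root form is an assumption based on the repository's weight computation):

```latex
% As written in eq. (7) of the paper (novel classes in the numerator):
\lambda = \lambda_{\text{base}} \cdot \sqrt{|C_n| \,/\, |C_o|}

% As computed in the code (novel classes in the denominator):
\lambda = \lambda_{\text{base}} \cdot \sqrt{|C_o| \,/\, |C_n|}
```

The code's version makes the constraint stronger as old classes accumulate, which is consistent with the intent of the less-forget constraint, so the inversion appears to be a typo in the paper's equation rather than in the code.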

@wonda You're right. I missed that.

> Hello,
> I found the calculation of the adaptive weight of less-forget constraint different from the description in the paper. Did I misunderstand this part or miss some details?
>
> lamda_mult = (out_features1+out_features2)*1.0 / (args.nb_cl)

I also cannot understand this. I also found that the result of this code for Ours-CNN is slightly lower than Ours-NME; I don't know whether that is because I only ran a single run.