Why does standard branch use logit adjustment in the computation of L_con?
machengcheng2016 opened this issue · 2 comments
Greetings!
According to your paper, Eq (1) and (2) say that the standard branch follows the FixMatch training, and there is no logit adjustment at all.
However, according to your code, the standard branch actually uses logit adjustment (args.adjustment_l1). I wonder why this inconsistency exists?
Thank you for your attention. The code here relates to our second finding: to learn a better feature extractor, the accuracy of the pseudo-labels is critical. The value of args.adjustment_l1 changes as training progresses. The standard branch follows the FixMatch training, except that it additionally applies the adjustment when generating pseudo-labels. I apologize for the misunderstanding caused by the related content in our paper, and I hope this helps you understand our method.
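To make the above concrete, here is a minimal sketch of pseudo-label generation with a logit-adjustment term subtracted before thresholding, in the style of FixMatch. The function name, the NumPy implementation, and the shapes are illustrative assumptions, not the repository's exact code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pseudo_labels_with_adjustment(logits_weak, adjustment, threshold=0.95):
    """Generate pseudo-labels from weak-augmentation logits.

    `adjustment` (shape: [num_classes]) plays the role of a logit-adjustment
    term (e.g. derived from the estimated class prior, as with
    args.adjustment_l1); subtracting it debiases the predictions toward
    tail classes before pseudo-labeling. Hypothetical sketch, not the
    repository's API.
    """
    adjusted = logits_weak - adjustment
    probs = softmax(adjusted)
    targets = probs.argmax(axis=-1)          # hard pseudo-labels
    conf = probs.max(axis=-1)
    mask = (conf >= threshold).astype(float)  # FixMatch confidence mask
    return targets, mask
```

The consistency loss L_con is then the cross-entropy between the strong-augmentation logits and these targets, weighted by the confidence mask.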
Yeah, yesterday I found the pseudo-code of ACR in the supplementary PDF file. Now I know the exact form of the loss function for training the standard branch.
Thanks for your explanation~