The warm-up strategy for bottom-up estimation
Closed this issue · 2 comments
Hey, I noticed you mention that 'initial bottom-up estimates are not reliable', so you use a warm-up strategy. I ran into the same problem when trying to reproduce your work in PyTorch: after several epochs the loss turns into 'nan'. Could you point me to the specific code in your work that implements this strategy? I tried to find it but failed. I would also really appreciate it if you could let me know of any possible solutions to avoid unstable training for the bottom-up estimation. (I guess this is also the reason why you only use single-class images for this step?) Thank you so much!
Hi, thank you for your interest! The warm-up strategy is implemented in the operator:
ICD/core/model/layers_custom/icd.py
Lines 134 to 135 in f78286a
and called by:
Line 44 in f78286a
The core of this work is to exclude the disturbance of inter-class discrimination. Each intra-class discriminator only sees features belonging to the class it is responsible for. In other words, we should avoid asking it to discriminate between features belonging to different foreground classes. This is why we update the bottom-up stage with single-class images only. Another possible approach would be to mask out other classes' foreground features, with masks derived from the other classes' intra-class discriminators or from the final estimations. But that makes the pipeline too complicated and may create another chicken-and-egg problem. Good luck!
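To make the idea concrete, here is a minimal sketch of how a warm-up weight and a single-class filter could be combined. This is not the code from `icd.py`; the function names (`warmup_weight`, `is_single_class`), the linear ramp, and the `warmup_steps` value are all assumptions for illustration only:

```python
def warmup_weight(step, warmup_steps=2000):
    # Hypothetical linear ramp: the bottom-up loss contributes nothing at
    # step 0 and reaches full weight after `warmup_steps` iterations, so
    # the unreliable early bottom-up estimates cannot destabilize training.
    return min(1.0, step / warmup_steps)

def is_single_class(image_level_labels):
    # Hypothetical filter: keep an image for the bottom-up update only if
    # its multi-hot image-level label vector marks exactly one fg class,
    # so each intra-class discriminator never sees another class's features.
    return sum(image_level_labels) == 1

# Example: combining losses for one training step (placeholder scalars).
step = 500
top_down_loss, bottom_up_loss = 1.2, 0.8
image_labels = [0, 1, 0, 0]  # multi-hot labels over foreground classes

w = warmup_weight(step) if is_single_class(image_labels) else 0.0
total_loss = top_down_loss + w * bottom_up_loss
```

In a real PyTorch loop the same gating would apply per image in the batch before the bottom-up loss is reduced; the key point is that the bottom-up term is downscaled early on and skipped entirely for multi-class images.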
Thank you so much for your work and help! It helps a lot!