Attn layers to use
Thanks for your great work. I have a question about the attention layers.
In the supplementary material of the paper, it is mentioned that the middle layers carry the strongest semantic information.
In the code, however, it seems that the layers in the up blocks are used. Does this mean the code should be modified to be consistent with the paper's results? Thanks.
Thank you for your interest.
No, the code is correct. By "middle layers" we do not mean literally the layers in the middle of the network; rather, we mean the layers that are neither at the very beginning nor at the very end.
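For reference, here is a minimal sketch (not this repository's actual code) of how one could list the cross-attention modules that sit in the up blocks of a diffusers-style Stable Diffusion UNet. The checkpoint name and the module names `up_blocks` / `attn2` follow the diffusers naming convention and are assumptions for illustration, not taken from SLiMe itself.

```python
# Sketch: enumerate cross-attention modules in the UNet's up blocks
# (diffusers naming conventions assumed; not SLiMe's code).
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# In diffusers, cross-attention modules are named "...attn2" and the
# decoder-side layers live under "up_blocks" -- i.e. layers that are
# neither at the very beginning nor at the very end of the network.
up_block_cross_attn = {
    name: module
    for name, module in unet.named_modules()
    if "up_blocks" in name and name.endswith("attn2")
}

for name in up_block_cross_attn:
    print(name)
```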
Thanks! I misunderstood that.
Hi, sorry to bother again.
I am curious about the size of the test split used in this work. For example, Tables 2 and 3 report SLiMe's performance in the 1-sample/10-sample settings; how many car and horse samples are used for testing?
Thank you very much!
All the test images in the Pascal VOC dataset
Thanks!