The purpose of self.embedding_maskfeature
Deleted user commented
Thank you for the impressive repo!
I have a question about self.embedding_maskfeature in your code. From what I can see, this module just decreases and then increases the number of channels — is that right, or does it serve some other purpose?
ymq2017 commented
Hi, self.embedding_maskfeature is used to add some trainable parameters to the learning of the mask feature.
Since the mask decoder is frozen, we found that adding learnable parameters on the mask feature improves performance on the HQ-Seg44K dataset.
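As a minimal sketch of the idea described above — a small trainable convolutional block applied on top of the frozen decoder's mask feature, fused residually so training starts close to the frozen baseline. The module name matches the thread, but the channel sizes, layer choices, and residual fusion here are assumptions for illustration, not the repo's exact definition:

```python
import torch
import torch.nn as nn

transformer_dim = 256  # assumed SAM transformer width

# Hypothetical sketch: a bottleneck-style conv block that introduces
# trainable parameters on the mask feature while the mask decoder stays frozen.
embedding_maskfeature = nn.Sequential(
    nn.Conv2d(transformer_dim // 8, transformer_dim // 4, 3, padding=1),
    nn.GELU(),
    nn.Conv2d(transformer_dim // 4, transformer_dim // 8, 3, padding=1),
)

# Residual fusion with the original upscaled embedding (assumed), so the
# learnable branch only needs to model a correction to the frozen feature.
upscaled_embedding = torch.randn(1, transformer_dim // 8, 64, 64)
hq_feature = embedding_maskfeature(upscaled_embedding) + upscaled_embedding
```

Because the output has the same shape as the input mask feature, the rest of the frozen decoder pipeline can consume it unchanged.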
Deleted user commented
Thanks for replying!
Deleted user commented
Thank you for replying! I also have a question about your ablation study in
Table 2: what does SAM + HQ-Output Token mean? My understanding is that you
just add a learnable token, self.hq_token, and take its dot product with
upscaled_embedding — is that right? If so, what is the difference
between self.hq_token and self.mask_tokens?
Thanks so much!
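The mechanism the question describes can be sketched as follows — a dedicated learnable HQ token whose decoded output is mapped by an MLP to the channel width of the upscaled embedding, then combined with it by a per-pixel dot product to produce mask logits. All dimensions, the MLP structure, and the variable names other than hq_token and upscaled_embedding are assumptions for illustration; in the real decoder the token would first attend to the image features rather than being used directly:

```python
import torch
import torch.nn as nn

transformer_dim = 256  # assumed SAM transformer width

# One extra learnable token, analogous to self.hq_token in the thread.
hq_token = nn.Embedding(1, transformer_dim)

# Hypothetical MLP head mapping the token to the embedding's channel count,
# in the spirit of SAM's per-mask hypernetwork MLPs.
hq_mlp = nn.Sequential(
    nn.Linear(transformer_dim, transformer_dim),
    nn.ReLU(),
    nn.Linear(transformer_dim, transformer_dim // 8),
)

b, c, h, w = 1, transformer_dim // 8, 64, 64
upscaled_embedding = torch.randn(b, c, h, w)

# Per-pixel dot product: (1, c) x (b, c, h*w) -> (b, 1, h*w) -> (b, 1, h, w).
token_out = hq_mlp(hq_token.weight)
masks_hq = (token_out @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)
```

Under this reading, the difference from self.mask_tokens would be that the HQ token is trained (together with its head) while the original decoder stays frozen, producing an additional high-quality mask channel.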