No position encoding? Could you explain some of your thoughts?
laisimiao opened this issue · 2 comments
laisimiao commented
No position encoding? Could you explain some of your thoughts?
alihassanijr commented
Thank you for your interest.
It's common practice in hierarchical vision transformers that use local attention with relative positional biases (e.g., Swin) not to use absolute positional encoding, and we simply followed that idea.
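For reference, the idea is that each pair of positions inside a local attention window gets a learnable bias indexed by their relative offset, which is added to the attention logits before softmax, so no absolute encoding is needed. Here is a minimal NumPy sketch of that Swin-style indexing; the function names are illustrative, not from this repository:

```python
import numpy as np

def rel_pos_bias_index(k: int) -> np.ndarray:
    """For a k x k attention window, map every (query, key) position pair
    to the index of its relative 2D offset in a (2k-1)^2-entry bias table."""
    ys, xs = np.meshgrid(np.arange(k), np.arange(k), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()])      # (2, k*k)
    rel = coords[:, :, None] - coords[:, None, :]    # (2, k*k, k*k), offsets
    rel += k - 1                                     # shift offsets to be >= 0
    return rel[0] * (2 * k - 1) + rel[1]             # flatten 2D offset to index

def add_rel_pos_bias(attn_logits: np.ndarray,
                     table: np.ndarray,
                     index: np.ndarray) -> np.ndarray:
    """Add the per-offset (learnable) bias to attention logits before softmax."""
    return attn_logits + table[index]
```

In a real model, `table` would be a learnable parameter of shape `((2k-1)**2,)` shared across the window, which is what lets the attention distinguish relative positions without any absolute encoding.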
alihassanijr commented
Closing this due to inactivity. If you still have questions feel free to open it back up.