ImageLinearAttention showcase
monajalal opened this issue · 0 comments
monajalal commented
Could you please show how to use ImageLinearAttention for image classification in combination with ViT? Do you replace SelfAttention with it, or do you use it alongside SelfAttention? Any representative example would be really appreciated. I want to use it in a ViT for images.
```python
from torch import nn

class EncoderBlock(nn.Module):
    def __init__(self, in_dim, mlp_dim, num_heads, dropout_rate=0.1, attn_dropout_rate=0.1):
        super(EncoderBlock, self).__init__()
        self.norm1 = nn.LayerNorm(in_dim)
        # self.attn = SelfAttention(in_dim, heads=num_heads, dropout_rate=attn_dropout_rate)
        # note: not sure how exactly I pass the params
        self.attn = ImageLinearAttention(in_dim, heads=num_heads, dropout_rate=attn_dropout_rate)
        # rest of code
```
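For reference, below is a minimal sketch of one way this could look, not an official answer. It assumes `ImageLinearAttention` takes the channel count as its first argument plus a `heads` keyword, has no dropout argument of its own, and operates on `(B, C, H, W)` feature maps rather than token sequences, so the ViT tokens are reshaped into the patch grid around the attention call. The `LinearAttnEncoderBlock` name and `grid_size` parameter are made up for this example, and it assumes there is no class token (so `seq_len == H * W`); the import path may differ in your install.

```python
import torch
from torch import nn
# assumed import path; adjust to where ImageLinearAttention lives in your install
from linear_attention_transformer.images import ImageLinearAttention


class LinearAttnEncoderBlock(nn.Module):
    """Sketch of a ViT encoder block with ImageLinearAttention replacing SelfAttention.

    Assumes no class token, so the (B, N, D) token sequence can be reshaped
    into a (B, D, H, W) patch grid with N == H * W.
    """

    def __init__(self, in_dim, mlp_dim, num_heads, grid_size, dropout_rate=0.1):
        super().__init__()
        self.grid_size = grid_size  # (H, W) of the patch grid

        self.norm1 = nn.LayerNorm(in_dim)
        # first positional arg assumed to be the channel count; dropout is
        # applied outside the module since ImageLinearAttention is assumed
        # not to take a dropout argument
        self.attn = ImageLinearAttention(in_dim, heads=num_heads)
        self.attn_dropout = nn.Dropout(dropout_rate)

        self.norm2 = nn.LayerNorm(in_dim)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, mlp_dim),
            nn.GELU(),
            nn.Dropout(dropout_rate),
            nn.Linear(mlp_dim, in_dim),
            nn.Dropout(dropout_rate),
        )

    def forward(self, x):  # x: (B, N, D) with N == H * W
        h, w = self.grid_size
        b, n, d = x.shape

        residual = x
        x = self.norm1(x)
        # tokens -> feature map: (B, N, D) -> (B, D, H, W)
        x = x.transpose(1, 2).reshape(b, d, h, w)
        x = self.attn(x)
        # feature map -> tokens: (B, D, H, W) -> (B, N, D)
        x = x.reshape(b, d, n).transpose(1, 2)
        x = self.attn_dropout(x)
        x = x + residual

        residual = x
        x = self.mlp(self.norm2(x))
        return x + residual


# usage sketch: 14x14 patch grid, 256-dim tokens
block = LinearAttnEncoderBlock(in_dim=256, mlp_dim=512, num_heads=8, grid_size=(14, 14))
tokens = torch.randn(2, 14 * 14, 256)
out = block(tokens)  # (2, 196, 256)
```

If the model needs a class token (or any sequence length that is not a perfect patch grid), a sequence-based linear attention would likely be a more natural drop-in replacement than reshaping around ImageLinearAttention.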