0aqz0/SLR

attention model

HaminyG opened this issue · 3 comments

Excellent job!! Since I'm new to this area, could you please tell me what `l` and `g` are, the inputs to `LinearAttentionBlock`?

0aqz0 commented

I added the attention blocks following this paper: Learn to Pay Attention (ICLR 2018). This figure shows the architecture from the paper.

[Figure: attention model architecture from Learn to Pay Attention]

To my understanding, `l` represents feature maps extracted from intermediate layers, and `g` represents the global feature extracted from the final layer.
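As a rough illustration of that idea, here is a minimal PyTorch sketch of such a block in the style of the Learn to Pay Attention paper: a 1x1 convolution scores the compatibility between `l` and `g` at each spatial location, a softmax turns the scores into attention weights, and the weighted local features are pooled into a vector. This is my own sketch, not necessarily the exact code in this repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttentionBlock(nn.Module):
    """Sketch of a linear attention block (Learn to Pay Attention, ICLR 2018).

    l: (N, C, H, W) intermediate-layer features.
    g: (N, C, H, W) global features from the final layer, assumed already
       projected/upsampled to match l's shape.
    """
    def __init__(self, in_features):
        super().__init__()
        # 1x1 conv: one compatibility score per spatial location
        self.op = nn.Conv2d(in_features, 1, kernel_size=1, bias=False)

    def forward(self, l, g):
        c = self.op(l + g)                            # (N, 1, H, W)
        a = F.softmax(c.view(l.size(0), -1), dim=1)   # softmax over H*W
        a = a.view(l.size(0), 1, l.size(2), l.size(3))
        g_out = (a * l).sum(dim=(2, 3))               # attended vector (N, C)
        return a, g_out
```

The attention map `a` can be visualized to see where the network looks, and `g_out` from several layers is typically concatenated before the classifier.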

You can check out that paper for details. In my own work, I tried adding similar attention blocks to a 3D ResNet, but the results so far are a bit worse than 3D ResNet without attention :) and I am still tuning the model.

0aqz0 commented

You're welcome.