ziplab/LIT

the code of Figure 4

(Figure caption: "Attention probabilities of PVT-S with standard MSA in all Transformer blocks. Best viewed in color.")


Hi, thank you for sharing. Could you share the code used to produce this figure? I really can't reproduce it by myself.
Many thanks if you can help me~

Hi @cyx669521, thanks for your interest!

I think you might be referring to Figure 3? We actually don't have a Figure 4 in our paper...

In any case, we have uploaded some scripts as well as the pretrained model for attention map visualisation. You can find the instructions here.

Cheers.
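For anyone landing here before checking the repo's scripts: the general idea of extracting attention probabilities from a Transformer is to recompute the softmax-normalised query-key scores at each MSA block, e.g. via forward hooks. The sketch below is a minimal, self-contained illustration, not the LIT repo's actual visualisation script; the module and function names (`ToyMSA`, `collect_attention`) are made up for this example.

```python
import torch
import torch.nn as nn

class ToyMSA(nn.Module):
    """Toy multi-head self-attention standing in for a Transformer block's MSA."""

    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)

    def forward(self, x):
        b, n, c = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)          # each: (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.head_dim ** -0.5
        attn = attn.softmax(dim=-1)                    # attention probabilities
        return (attn @ v).transpose(1, 2).reshape(b, n, c)


def collect_attention(model, x):
    """Capture per-block attention probabilities with forward hooks."""
    maps = []

    def hook(module, inputs, output):
        # Recompute the attention probabilities from the module's own projections.
        b, n, c = inputs[0].shape
        qkv = module.qkv(inputs[0]).reshape(b, n, 3, module.num_heads, module.head_dim)
        q, k, _ = qkv.permute(2, 0, 3, 1, 4)
        attn = ((q @ k.transpose(-2, -1)) * module.head_dim ** -0.5).softmax(dim=-1)
        maps.append(attn.detach())

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, ToyMSA)]
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return maps  # one (batch, heads, n, n) tensor per hooked block
```

Each returned tensor can then be averaged over heads and rendered with `matplotlib.pyplot.imshow` to get a heat map similar to the figure discussed above.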

Thank you very much~