jnhwkim/ban-vqa

Attention Visualization

mokadyr opened this issue · 1 comments

Hi,
Love your work and repository

Just want to know how I can get the attention visualization (like Figures 3, 4 in the paper)?

Unfortunately, we do not provide code for the visualization. Instead, you can write your own using att from

att, logits = self.v_att.forward_all(v, q_emb) # b x g x v x q

att is a 4-dimensional tensor of attention weights with shape (batch) x (glimpse) x (visual) x (language). Happy coding! 👨‍💻
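
For reference, here is a minimal sketch of one way to turn att into a heatmap (a hypothetical helper, not part of this repo). It assumes you have already run the model, captured att as a torch.Tensor of shape (batch, glimpse, visual, language), and have the tokenized question words for one example:

import matplotlib.pyplot as plt

def plot_attention(att, tokens, batch_idx=0, glimpse=0):
    """Plot one glimpse of the attention map as a heatmap.

    att:    torch.Tensor of shape (batch, glimpse, visual, language)
    tokens: list of question words, used for the x-axis labels
    """
    # Detach from the autograd graph and move to CPU for plotting.
    a = att[batch_idx, glimpse].detach().cpu().numpy()  # (visual, language)

    fig, ax = plt.subplots(figsize=(8, 6))
    im = ax.imshow(a, aspect='auto', cmap='viridis')
    ax.set_xticks(range(len(tokens)))
    ax.set_xticklabels(tokens, rotation=45, ha='right')
    ax.set_xlabel('question tokens')
    ax.set_ylabel('detected object index')
    fig.colorbar(im, ax=ax, label='attention weight')
    plt.tight_layout()
    plt.show()

To reproduce figures like those in the paper, you would additionally need the detector's bounding boxes so you can draw the top-attended regions on the image itself; the heatmap above only shows the object-by-token weights.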