Aleph-Alpha/magma

`build_labels` includes masked image tokens?

Opened this issue · 3 comments

Hi Authors,

In these lines, the function `build_labels` masks all the labels at positions up to the sequence length of the image embeddings. What difference would it make if one just used the caption?

To be more specific, the code currently builds labels whose first part (with the same sequence length as the image) is set entirely to -100, while the second part holds the actual text label ids. Why do we need all the -100s? Why couldn't we just use the text label ids?
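For concreteness, here is a minimal sketch of the behavior I mean (hypothetical names, not the actual repo code):

```python
import torch

def build_labels_sketch(image_seq_len: int, caption_ids: torch.Tensor) -> torch.Tensor:
    # Labels for a sequence of [image embeddings | caption tokens]:
    # every image position is set to -100, the rest are the caption ids.
    image_part = torch.full((image_seq_len,), -100, dtype=torch.long)
    return torch.cat([image_part, caption_ids])

# e.g. 4 image positions followed by a 3-token caption
print(build_labels_sketch(4, torch.tensor([101, 102, 103])))
# tensor([-100, -100, -100, -100,  101,  102,  103])
```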

Thanks a lot!

CoEich commented

Hi,

positions with label -100 are ignored by default by torch's cross entropy loss. We don't want a loss on these positions because the model is only supposed to predict the text.
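A toy example (illustrative shapes and names, not our actual code):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(7, 50000)  # one row of logits per sequence position
labels = torch.tensor([-100, -100, -100, -100, 101, 102, 103])

# ignore_index=-100 is the default, so the image positions contribute
# nothing to the loss (and receive no gradient).
loss = F.cross_entropy(logits, labels)

# Identical to computing the loss on the text positions alone:
text_only = F.cross_entropy(logits[4:], labels[4:])
assert torch.allclose(loss, text_only)
```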

Best,

Constantin


Yeah I get it. But why mask at all? Why not just cut that part out and only use the text that you are predicting?

CoEich commented

I guess you are right: one could also truncate the logits and labels from the left by the number of image positions.

In any case we need some logic to mask the right-padding: since the amount of right-padding varies per batch item, we cannot simply cut this part out without breaking up the batch, which would be awkward. Using the same function to mask the image positions as well seems like the cleanest solution to me, along the lines of the sketch below.
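Something like this (a hypothetical sketch, not our actual code, assuming a padded batch of label ids and a known pad token id):

```python
import torch

def mask_positions_sketch(labels: torch.Tensor, image_seq_len: int,
                          pad_token_id: int) -> torch.Tensor:
    # Set both the image prefix and the right-padding to -100 so neither
    # contributes to the loss, while the batch shape stays intact.
    # (Masking padding by value assumes pad_token_id never occurs as a
    # real label token.)
    labels = labels.clone()
    labels[:, :image_seq_len] = -100       # image positions
    labels[labels == pad_token_id] = -100  # per-item right-padding
    return labels

# Two batch items with different amounts of right-padding (pad id 0):
batch = torch.tensor([[9, 9, 101, 102, 103,   0, 0],
                      [9, 9, 104, 105, 106, 107, 0]])
print(mask_positions_sketch(batch, image_seq_len=2, pad_token_id=0))
# tensor([[-100, -100,  101,  102,  103, -100, -100],
#         [-100, -100,  104,  105,  106,  107, -100]])
```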

In the end it is a matter of taste :-)