tensorflow/text

Redundant `create_padding_mask(inp)` masks

markub3327 opened this issue · 0 comments

Hello,

In your tutorial about the Transformer, I think there are redundant `create_padding_mask(inp)` calls: the same mask is created for both the encoder and the decoder. Please look here:

    # Encoder padding mask
    enc_padding_mask = create_padding_mask(inp)

    # Used in the 2nd attention block in the decoder.
    # This padding mask is used to mask the encoder outputs.
    dec_padding_mask = create_padding_mask(inp)

Both masks are built from the same `inp`, so the second `create_padding_mask(inp)` call recomputes an identical tensor. That is wasteful: the mask could be created once and reused for the encoder and for the decoder's second attention block.
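For illustration, here is a minimal sketch of the deduplicated version. The `create_padding_mask` body is taken from the tutorial; the reuse at the bottom is my suggested change, not the tutorial's current code:

    import tensorflow as tf

    def create_padding_mask(seq):
      # Tutorial helper: 1.0 marks padding positions (token id 0),
      # 0.0 marks real tokens.
      seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
      # Add extra axes so the mask broadcasts over the attention logits.
      return seq[:, tf.newaxis, tf.newaxis, :]

    inp = tf.constant([[7, 6, 0, 0, 1]])  # toy batch with padding

    # Create the mask once and point both names at the same tensor.
    padding_mask = create_padding_mask(inp)
    enc_padding_mask = padding_mask  # encoder self-attention
    dec_padding_mask = padding_mask  # decoder's 2nd attention block

Since TensorFlow tensors are immutable, aliasing the same tensor for both roles is safe; nothing downstream can mutate the mask.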

Link: https://www.tensorflow.org/text/tutorials/transformer#create_the_transformer

Thanks.
Have a nice day.