The layer structure and mask
ayushais opened this issue · 1 comment
ayushais commented
Hi,
Thanks for this contribution. In the implementation of attn_mlp, the first linear layer increases the dimension. Is this standard practice? I could not find any details about it in the paper. The paper also does not describe the use of a mask; is that likewise some standard practice for attention layers?
Thanks!!
toannguyen1904 commented
I think the mask is used in some cases similar to the Transformer in NLP, if you need it.
If you don't have any special purpose, just set the mask to all ones.
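If it helps, here is a minimal NumPy sketch (hypothetical, not the repo's actual code) of how such a binary mask is typically applied in scaled dot-product attention: positions where the mask is 0 get a large negative score before the softmax, so they receive (near-)zero attention weight, and an all-ones mask is effectively a no-op.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(q, k, v, mask=None):
    # Scaled dot-product attention with an optional binary mask.
    # mask shape: (..., n_queries, n_keys); 1 = attend, 0 = block.
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(q.shape[-1])
    if mask is not None:
        # Large negative value so softmax assigns ~0 weight to blocked positions.
        scores = np.where(mask == 0, -1e9, scores)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 4, 8))
k = rng.standard_normal((2, 4, 8))
v = rng.standard_normal((2, 4, 8))

# An all-ones mask changes nothing:
out_ones = masked_attention(q, k, v, mask=np.ones((2, 4, 4)))
out_none = masked_attention(q, k, v)
print(np.allclose(out_ones, out_none))  # True
```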