Does the additional position embedding increase the parameters of the transformer?
ken-ando opened this issue · 0 comments
ken-ando commented
This work introduces additional positional embeddings for tokens beyond position 512.
PreSumm/src/models/model_builder.py
Lines 150 to 154 in 70b810e
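For context, a minimal sketch of what such an extension typically looks like (this is not the verbatim snippet from model_builder.py; names such as `bert` and `max_pos` are assumptions standing in for the model and the configured maximum position):

```python
import torch.nn as nn

# Sketch: extend BERT's learned position embeddings beyond the original 512 positions.
# `bert` and `max_pos` are assumed to come from the surrounding model/config code.
if max_pos > 512:
    # A new, larger embedding table; this is the only place extra parameters appear.
    my_pos_embeddings = nn.Embedding(max_pos, bert.model.config.hidden_size)
    # Keep the pretrained weights for the first 512 positions.
    my_pos_embeddings.weight.data[:512] = \
        bert.model.embeddings.position_embeddings.weight.data
    # Initialize the new positions by repeating the last pretrained embedding.
    my_pos_embeddings.weight.data[512:] = \
        bert.model.embeddings.position_embeddings.weight.data[-1][None, :].repeat(max_pos - 512, 1)
    # Swap the enlarged table into the embedding layer; the encoder layers themselves are untouched.
    bert.model.embeddings.position_embeddings = my_pos_embeddings
```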
However, this code does not seem to extend the transformer encoder itself.
I think that if the subsequent encoder does not get additional parameters, the shapes will not match.
So my guess is that the transformers library automatically adds the necessary parameters to the transformer. Is this understanding correct?