qile2000/LAMDA-TALENT

Regarding Position Encoding

isCopyman opened this issue · 1 comment

May I ask whether positional encoding of the features matters in these Transformer-based architectures? The vast majority of papers do not seem to mention positional encoding.

For tabular data, the features typically have no inherent order: permuting the columns of the raw data does not change what the dataset means. Because of this, positional encoding is generally unnecessary in Transformer-based architectures for tabular data. Instead, models such as FT-Transformer give every feature its own learned embedding in the feature tokenizer, so each token already carries the identity of the column it came from, which plays the role a positional encoding would otherwise serve. The absence of positional encoding in most papers likely reflects this characteristic of tabular data.
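
To make this concrete, here is a minimal sketch (not code from LAMDA-TALENT; the class name `NumericalFeatureTokenizer` and all hyperparameters are made up for illustration) of an FT-Transformer-style feature tokenizer in PyTorch. Each numerical feature gets its own learned weight and bias, so the resulting token identifies its column and the encoder is applied without any positional encoding:

```python
import torch
import torch.nn as nn

class NumericalFeatureTokenizer(nn.Module):
    """Maps each numerical feature to its own learned token embedding."""

    def __init__(self, n_features: int, d_token: int):
        super().__init__()
        # One (weight, bias) embedding per feature/column.
        self.weight = nn.Parameter(torch.empty(n_features, d_token))
        self.bias = nn.Parameter(torch.empty(n_features, d_token))
        nn.init.normal_(self.weight, std=0.02)
        nn.init.zeros_(self.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> tokens: (batch, n_features, d_token)
        return x.unsqueeze(-1) * self.weight + self.bias

tokenizer = NumericalFeatureTokenizer(n_features=8, d_token=64)
tokens = tokenizer(torch.randn(4, 8))

# Self-attention over the feature tokens, with no positional encoding added:
encoder = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
out = encoder(tokens)  # (4, 8, 64)
```

Because column identity is baked into the per-feature embeddings, permuting the input columns together with their embeddings yields an equivalent model, which is exactly why an explicit positional encoding adds nothing here.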