ma-xu/pointMLP-pytorch

Some questions about the global context and cls_token

mmiku1 opened this issue · 2 comments

Hi @ma-xu
```python
gmp_list.append(F.adaptive_max_pool1d(self.gmp_map_list[i](x_list[i]), 1))

global_context = self.gmp_map_end(torch.cat(gmp_list, dim=1))  # [b, gmp_dim, 1]

x = torch.cat([x, global_context.repeat([1, 1, x.shape[-1]]), cls_token.repeat([1, 1, x.shape[-1]])], dim=1)
```

The features from each encoder stage are max-pooled and concatenated to form the global context, which is then concatenated only with the output of the last decoder stage.
Why not concatenate it with every decoder stage?
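
For reference, here is a minimal, self-contained sketch of the pattern I am asking about (hypothetical channel widths and tensor shapes, not the exact code from this repo): each encoder stage's features are max-pooled, fused into a single global context vector, and broadcast-concatenated, together with the class token, onto the final decoder features only.

```python
# Minimal sketch of the global-context pattern (hypothetical dims/shapes).
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N = 2, 1024                        # batch size, number of points
en_dims = [64, 128, 256, 512]         # hypothetical per-stage channel widths
gmp_dim, cls_dim = 64, 64

# one 1x1 conv per encoder stage to map its features to gmp_dim
gmp_map_list = nn.ModuleList(nn.Conv1d(d, gmp_dim, 1) for d in en_dims)
gmp_map_end = nn.Conv1d(gmp_dim * len(en_dims), gmp_dim, 1)

# fake per-stage encoder outputs: [B, C_i, N_i] with progressively fewer points
x_list = [torch.randn(B, d, N // (2 ** i)) for i, d in enumerate(en_dims)]

# max-pool each stage, concatenate, and fuse into one global context vector
gmp_list = [F.adaptive_max_pool1d(m(x), 1) for m, x in zip(gmp_map_list, x_list)]
global_context = gmp_map_end(torch.cat(gmp_list, dim=1))   # [B, gmp_dim, 1]

x = torch.randn(B, 128, N)            # output of the last decoder stage
cls_token = torch.randn(B, cls_dim, 1)  # embedded class label

# broadcast global context and cls_token over all points, concat with x
x = torch.cat([x,
               global_context.repeat(1, 1, x.shape[-1]),
               cls_token.repeat(1, 1, x.shape[-1])], dim=1)
print(x.shape)                        # [B, 128 + gmp_dim + cls_dim, N]
```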

Thanks.

ma-xu commented

@mmiku1 It could be done, but it is unnecessary. Empirically, this setting already achieves promising performance.
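
For illustration, a hypothetical sketch of the alternative the question describes, i.e. broadcasting the same global context onto every decoder stage rather than only the last one; the per-stage fusion convs here are assumptions and are not part of this repo.

```python
# Hypothetical per-stage fusion variant (not the repo's design).
import torch
import torch.nn as nn

B, gmp_dim = 2, 64
de_dims = [512, 256, 128]                       # hypothetical decoder widths
fuse = nn.ModuleList(nn.Conv1d(d + gmp_dim, d, 1) for d in de_dims)

global_context = torch.randn(B, gmp_dim, 1)     # as computed by the encoder
de_feats = [torch.randn(B, d, 1024 // (2 ** i)) for i, d in enumerate(de_dims)]

# concat the broadcast global context with each decoder stage, then fuse back
fused = [f(torch.cat([x, global_context.repeat(1, 1, x.shape[-1])], dim=1))
         for f, x in zip(fuse, de_feats)]
print([t.shape for t in fused])
```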

mmiku1 commented

Thank you for your answer. Wish you a happy life!