Question about the "share_planes"
BuLingBin commented
Hi, this is nice work!
But I am confused about the `share_planes` parameter in PointTransformerLayer.
```python
n, nsample, c = x_v.shape; s = self.share_planes
x = ((x_v + p_r).view(n, nsample, s, c // s) * w.unsqueeze(2)).sum(1).view(n, c)
```
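To make the shapes concrete, here is a small check with toy tensors (the sizes `n=4, nsample=16, c=32, share_planes=8` are just examples, and I am ignoring the softmax over neighbors that is applied to `w` before this line):

```python
import torch

# Toy sizes for illustration only
n, nsample, c, s = 4, 16, 32, 8

x_v = torch.randn(n, nsample, c)      # transformed values for each neighbor
p_r = torch.randn(n, nsample, c)      # positional encoding
w = torch.randn(n, nsample, c // s)   # attention weights: only c // s channels per neighbor

out = ((x_v + p_r).view(n, nsample, s, c // s)  # split c channels into s groups of c // s
       * w.unsqueeze(2)                          # the same c // s weights broadcast over all s groups
       ).sum(1).view(n, c)                       # aggregate over the nsample neighbors
print(out.shape)  # torch.Size([4, 32])
```

So, if I read the reshape correctly, each attention weight is shared by `s = share_planes` channels of the value vector.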
Apparently, w's channel dimension is reduced by a Linear layer, which is not illustrated in the paper. I think this operation is not consistent with vector attention; it looks more like a compromise between scalar attention and vector attention.
Why partition the feature dimension of `(x_v + p_r)` into `share_planes` groups? See the sketch below for how I currently picture the difference.
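For comparison, this is only my sketch of scalar attention, full vector attention, and the shared-planes version with the same toy tensors (the actual layer also has the attention MLP and position encodings, which I omit here):

```python
import torch
import torch.nn.functional as F

n, nsample, c, s = 4, 16, 32, 8          # toy sizes
v = torch.randn(n, nsample, c)           # stands in for (x_v + p_r) above

# Scalar attention: one weight per neighbor, shared by all c channels
w_scalar = F.softmax(torch.randn(n, nsample, 1), dim=1)
y_scalar = (v * w_scalar).sum(1)                                   # (n, c)

# Full vector attention (how I read the paper): one weight per neighbor per channel
w_vector = F.softmax(torch.randn(n, nsample, c), dim=1)
y_vector = (v * w_vector).sum(1)                                   # (n, c)

# share_planes version: c // s weights per neighbor, each shared by s channels
w_shared = F.softmax(torch.randn(n, nsample, c // s), dim=1)
y_shared = (v.view(n, nsample, s, c // s)
            * w_shared.unsqueeze(2)).sum(1).view(n, c)             # (n, c)
```

That is why it looks to me like a middle ground between the two. Is this mainly for efficiency, or is there another reason?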