ma-xu/pointMLP-pytorch

Some questions about LocalGrouper

root116688 opened this issue · 6 comments

Hi, thank you very much for providing the code. I have a few questions I'd like to ask below.


1. What is the difference between center and anchor?


grouped_points = self.affine_alpha*grouped_points + self.affine_beta

2. What exactly do the learnable parameters alpha and beta do?


new_points = torch.cat([grouped_points, new_points.view(B, S, 1, -1).repeat(1, 1, self.kneighbors, 1)], dim=-1)

3. I don't really understand why new_points needs to be concatenated here?



  4. Why is new_xyz returned, but never used in the Model forward?

5. How does [512,128] get converted to [256,24,128]?
6. Please tell me how the output of one stage is passed to the next stage?
7. And why is this repeated four times?

Thank you very much!

ma-xu commented

@root116688 Thanks for your interest and detailed questions. Here are the responses.

  1. center means the mean value of the local points, and anchor means the selected center point.
    We always use anchor in our implementation.
  2. Alpha and beta can be considered a common practice from normalization techniques: they scale and shift the values.
    Refer to BN, GN, LN, etc. (see the sketch after this list).
  3. We noticed that this strategy achieves better performance.
  4. It is used in
    xyz, x = self.local_grouper_list[i](xyz, x.permute(0, 2, 1)) # [b,g,3] [b,g,k,d]
  5. Using the local grouper. How does [512,128] convert to [256,24,128]? We select 256 points from the 512 points and consider the 24 nearest points for each selected one. The resulting tensor would be [256,24,128] (see the sketch below).
  6. Same as Q5.
  7. What do you mean by "repeat four times"? If you mean the four stages, that is a common practice in network design. Other designs would be okay, but the performance may vary.
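
To make answers 2, 3, and 5 concrete, here is a minimal, self-contained sketch of a LocalGrouper-style module. This is an editorial illustration, not the repository's exact code: random sampling stands in for farthest point sampling, and the std here is computed per feature rather than per batch, but the tensor shapes and the alpha/beta affine step follow the discussion above.

```python
import torch
import torch.nn as nn

class SimpleLocalGrouper(nn.Module):
    """Sketch of a LocalGrouper-style module (not the repository's exact code)."""

    def __init__(self, channel, kneighbors):
        super().__init__()
        self.kneighbors = kneighbors
        # Learnable affine parameters (Q2): initialized to ones/zeros,
        # then updated by gradient descent, like the gamma/beta of LN.
        self.affine_alpha = nn.Parameter(torch.ones(1, 1, 1, channel))
        self.affine_beta = nn.Parameter(torch.zeros(1, 1, 1, channel))

    def forward(self, xyz, points, npoint):
        # xyz: [B, N, 3] coordinates, points: [B, N, D] features
        B, N, D = points.shape

        # Anchor selection. The repo uses farthest point sampling; random
        # indices are used here only to keep the sketch short.
        idx = torch.stack([torch.randperm(N)[:npoint] for _ in range(B)])          # [B, S]
        new_xyz = torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))        # [B, S, 3]
        new_points = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, D))  # [B, S, D]

        # kNN grouping (Q5): the k nearest neighbors of each anchor.
        dists = torch.cdist(new_xyz, xyz)                                          # [B, S, N]
        knn_idx = dists.topk(self.kneighbors, largest=False).indices               # [B, S, k]
        grouped_points = torch.gather(
            points.unsqueeze(1).expand(-1, npoint, -1, -1), 2,
            knn_idx.unsqueeze(-1).expand(-1, -1, -1, D))                           # [B, S, k, D]

        # Normalize around the anchor ("anchor" mode), then scale/shift (Q2).
        mean = new_points.unsqueeze(2)                                             # [B, S, 1, D]
        std = (grouped_points - mean).std(dim=-1, keepdim=True)
        grouped_points = (grouped_points - mean) / (std + 1e-5)
        grouped_points = self.affine_alpha * grouped_points + self.affine_beta

        # Concatenate the anchor feature onto every neighbor (Q3).
        new_points = torch.cat(
            [grouped_points,
             new_points.view(B, npoint, 1, -1).repeat(1, 1, self.kneighbors, 1)],
            dim=-1)                                                                # [B, S, k, 2D]
        return new_xyz, new_points


grouper = SimpleLocalGrouper(channel=128, kneighbors=24)
xyz, points = torch.rand(2, 512, 3), torch.rand(2, 512, 128)
new_xyz, new_points = grouper(xyz, points, npoint=256)
print(new_xyz.shape, new_points.shape)  # [2, 256, 3], [2, 256, 24, 256]
```

Grouping alone gives [B,256,24,128]; the concatenation from Q3 then doubles the channel dimension to 256.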

Let me know if you have any further questions.

@ma-xu
1. Is "anchor" provide higher accuracy?
2. How ones and zeros to do normalization?

  1. Thanks, I got it!
  2. My mistake, I forgot it is in a for loop.
  3. Thanks, I finally understand.
  4. Thanks.
  5. Thank you again~
ma-xu commented
  1. Yes, empirically, it will achieve better performance.
  2. It is not ones & zeros. See here (lines 177-179):
    grouped_points = (grouped_points-mean)/(std + 1e-5)

Thank you for replying so many times.

Sorry, by ones and zeros I meant self.affine_alpha and self.affine_beta; I don't understand how these values perform normalization.
Is it Layer Normalization?

This line:

grouped_points = self.affine_alpha*grouped_points + self.affine_beta

ma-xu commented

They are learnable.
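
For a concrete picture (an editorial sketch, not the repository's code): the ones and zeros are only the initialization of the learnable parameters, analogous to the elementwise-affine weight/bias of LayerNorm. At initialization the affine step is an identity on the normalized features; training then learns per-channel scales and shifts. The mean/std reduction below is illustrative; see lines 177-179 of the repo for the exact computation.

```python
import torch
import torch.nn as nn

channel = 128
# Learnable scale/shift, initialized to the identity transform
# (ones and zeros), like LayerNorm's weight (gamma) and bias (beta).
affine_alpha = nn.Parameter(torch.ones(1, 1, 1, channel))
affine_beta = nn.Parameter(torch.zeros(1, 1, 1, channel))

grouped_points = torch.randn(2, 256, 24, channel)        # [B, S, k, D]
mean = grouped_points.mean(dim=2, keepdim=True)          # statistics over the neighborhood
std = grouped_points.std(dim=2, keepdim=True)
normalized = (grouped_points - mean) / (std + 1e-5)

# Identity at initialization; after training, alpha/beta rescale and
# shift each channel, the same role gamma/beta play in BN/GN/LN.
out = affine_alpha * normalized + affine_beta
```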

Thank you again~