ma-xu/pointMLP-pytorch

Impossible shape during ablation study

ShenZheng2000 opened this issue · 2 comments

Hello, authors! I am confused about Table 5, where you report an ablation study that includes the configuration with Φpos and Affine but no Φpre.

Based on lines 337-340, for the complete model, the shape of x should evolve as follows during processing:
[b,g,k,d] -> Affine -> [b,g,k,d] -> Φpre -> [b,d,g] -> Φpos -> [b,d,g]

If we remove Affine, the following still works:
[b,g,k,d] -> Φpre -> [b,d,g] -> Φpos -> [b,d,g]

If we remove Φpos, the following also works:
[b,g,k,d] -> Affine -> [b,g,k,d] -> Φpre -> [b,d,g]

However, if we remove Φpre, the following seems impossible, since Φpos cannot process an input of shape [b,g,k,d]:
[b,g,k,d] -> Affine -> [b,g,k,d] -> Φpos -> [b,d,g]
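The mismatch can be sketched in a few lines, assuming Φpos is built from shared `nn.Conv1d` layers as in the paper (the layer below is a hypothetical stand-in, not the repo's actual block): a Conv1d expects a 3D [b, d, g] input, so the 4D [b, g, k, d] output of Affine is rejected.

```python
import torch
import torch.nn as nn

b, g, k, d = 2, 16, 24, 64            # batch, groups, neighbors, channels
x = torch.randn(b, g, k, d)           # Affine output: [b, g, k, d]

pos_block = nn.Conv1d(d, d, kernel_size=1)   # stand-in for one Φpos layer

y = pos_block(torch.randn(b, d, g))   # 3D [b, d, g] input works fine
print(y.shape)                        # torch.Size([2, 64, 16])

err = None
try:
    pos_block(x)                      # 4D [b, g, k, d] input is rejected
except RuntimeError as e:
    err = e
print("Conv1d rejected 4D input:", err is not None)
```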

I hope you can clarify my doubts.
Thanks!
Shen Zheng

ma-xu commented

@ShenZheng2000 Thanks for your detailed description.

This ablation can be achieved by simply setting pre_blocks=[0, 0, 0, 0] in the model configuration. Here is an example:

return Model(points=1024, class_num=num_classes, embed_dim=64, groups=1, res_expansion=1.0,
             activation="relu", bias=False, use_xyz=False, normalize="anchor",
             dim_expansion=[2, 2, 2, 2], pre_blocks=[0, 0, 0, 0], pos_blocks=[2, 2, 2, 2],
             k_neighbors=[24, 24, 24, 24], reducers=[2, 2, 2, 2], **kwargs)

This removes the residual PointMLP (Φpre) blocks, but the aggregation function is still applied. Hence, the tensor shape will still be [b,d,g] as you described.
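The remaining aggregation step can be sketched as follows. This is a shape-only sketch, not the repo's actual code: the max over the neighbor dimension (the paper's aggregation function) and the permute to a channels-first layout are assumptions about how [b, g, k, d] reaches the [b, d, g] shape that Φpos consumes.

```python
import torch

b, g, k, d = 2, 16, 24, 64        # batch, groups, neighbors, channels
x = torch.randn(b, g, k, d)       # output of Affine: [b, g, k, d]

# Aggregation over the k neighbors (assumed max-pooling): [b, g, k, d] -> [b, g, d]
x = x.amax(dim=2)

# Channels-first layout for the Conv1d-based Φpos blocks: [b, g, d] -> [b, d, g]
x = x.permute(0, 2, 1)

print(x.shape)                    # torch.Size([2, 64, 16])
```

So even with pre_blocks=[0, 0, 0, 0], Φpos receives a 3D [b, d, g] tensor.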

ShenZheng2000 commented

Thanks for your timely reply. Really helpful for me. 😄