WangYueFt/dgcnn

Potential discrepancy between training and testing for part segmentation

imankgoyal opened this issue · 5 comments

Dear Wang,

I really liked your paper, and thank you for sharing your code. I think there is a potential discrepancy between the training and testing setups for part segmentation. It would be great if you could take a look and clarify a few doubts I have.

Looking forward to your response.
Best,
Ankit

@syb7573330 Can we look into whether there is a bug in the segmentation implementation?

Hi @WangYueFt and @syb7573330,
I was wondering if you got a chance to look into the issue.

I will check later. Thanks

Hi @syb7573330, I have one other question, following up on issue #8.

Could you please confirm what exactly you mean by "the best results during the training process"? It looks like a model is saved every 5 epochs (40 times over the course of training). Is the final reported test-set result the maximum over all 40 saved models? Also, am I right in assuming that the metric over which you take the maximum is mean instance IoU?

https://github.com/WangYueFt/dgcnn/blob/master/tensorflow/part_seg/train_multi_gpu.py#L381

Thanks!
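For concreteness, here is a minimal sketch (not the repository's code) of the two things the question asks about: how mean instance IoU is typically computed for part segmentation, and what taking the maximum over the roughly 40 saved snapshots would look like. The helper names and the `evaluate_checkpoint` callable are hypothetical stand-ins for restoring a snapshot and running part_seg/test.py-style inference.

```python
import numpy as np

def mean_instance_iou(pred_labels, gt_labels, parts_per_shape):
    """Mean instance IoU: average the per-part IoU within each shape,
    then average those per-shape scores over the whole test set."""
    shape_ious = []
    for pred, gt, parts in zip(pred_labels, gt_labels, parts_per_shape):
        part_ious = []
        for p in parts:
            inter = np.sum((pred == p) & (gt == p))
            union = np.sum((pred == p) | (gt == p))
            # If a part is absent from both prediction and ground truth,
            # the PointNet-style evaluation counts it as IoU = 1.
            part_ious.append(1.0 if union == 0 else inter / union)
        shape_ious.append(np.mean(part_ious))
    return float(np.mean(shape_ious))

def best_checkpoint(checkpoint_paths, evaluate_checkpoint):
    """Return the snapshot with the highest test-set score, i.e. the
    'max among all saved models' selection described in the question."""
    scores = {ckpt: evaluate_checkpoint(ckpt) for ckpt in checkpoint_paths}
    best = max(scores, key=scores.get)
    return best, scores[best]
```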

  1. We used the same training and testing code and the same preprocessed data as PointNet, for a fair comparison. Please check their code and data.

  2. "pc_augment_to_point_num()" is for padding purpose. You are right, when detecting neighbors, duplicated points may be included. But the number of duplicated points should be very small compared to the total number of neighbors, so I think this effect is minor.

  3. You are right. Please see the detailed calculation in part_seg/test.py.
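To illustrate item 2, here is a small self-contained sketch of padding a point cloud by repeating its own points and then running a brute-force k-NN search. The `pad_to_point_num()` helper below is a hypothetical stand-in that only mimics the padding behaviour discussed above; see the real pc_augment_to_point_num() in the PointNet/DGCNN part-seg code for the exact implementation. The example shows why duplicates can appear among a point's neighbors, and also why their share stays small when only a few points are padded.

```python
import numpy as np

def pad_to_point_num(points, target_num):
    """Pad a point cloud to a fixed size by repeating its own points
    (hypothetical sketch of the padding discussed in item 2)."""
    n = points.shape[0]
    if n >= target_num:
        return points[:target_num]
    reps = int(np.ceil(target_num / n))
    return np.tile(points, (reps, 1))[:target_num]

def knn_indices(points, k):
    """Brute-force k-nearest-neighbour indices, excluding the point itself."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    return np.argsort(dists, axis=1)[:, :k]

# After padding, the duplicated copies sit at distance 0 from their originals,
# so a few of each point's k nearest neighbours can be exact duplicates.
# Here only 24 of the 1024 points are duplicates, so the effect is limited.
pc = np.random.rand(1000, 3).astype(np.float32)
padded = pad_to_point_num(pc, 1024)
neighbors = knn_indices(padded, k=20)
```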