yuxumin/PoinTr

Fidelity and MMD metrics

sand2sand opened this issue · 12 comments

Thanks for your good work! I look forward to your answers to a few questions about the metrics.
Both PF-Net and PoinTr concatenate the input and the prediction to form the output, so it looks like an asymmetric Chamfer distance is used when calculating Fidelity. Why, then, is PF-Net's value on that metric greater than 0?
When calculating MMD, do you use the symmetric CD, and do you select the object from the PCN Cars test set with the minimal CD to the prediction?
Could you please describe the calculation of the Fidelity and MMD metrics in more detail?

Hi, sorry for the late reply.

Both PF-Net and PoinTr concatenate the input and the prediction to form the output, so it looks like an asymmetric Chamfer distance is used when calculating Fidelity. Why, then, is PF-Net's value on that metric greater than 0?

We made a mistake. We directly cited the metric from "ASHF-Net: Adaptive Sampling and Hierarchical Folding Network for Robust Point Cloud Completion" in our CVPR 2021 submission, and when we resubmitted the paper to ICCV 2021, this mistake was overlooked.
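For reference, Fidelity is the one-directional (asymmetric) Chamfer distance from the real partial input to the completed output, which is why it collapses to zero when the output literally contains the input. Below is a minimal sketch of that calculation, assuming simple unbatched (N, 3) tensors rather than the repository's actual batched code:

```python
# A minimal sketch (not the repository code) of Fidelity as a one-directional
# Chamfer distance: the average squared distance from each point of the real
# partial input to its nearest neighbour in the completed output.
import torch

def fidelity(partial_input: torch.Tensor, completion: torch.Tensor) -> torch.Tensor:
    """partial_input: (N, 3), completion: (M, 3); returns a scalar tensor."""
    dists = torch.cdist(partial_input, completion, p=2) ** 2   # (N, M) pairwise squared distances
    return dists.min(dim=1).values.mean()                      # nearest-neighbour distance per input point

# When the prediction is concatenated with the input, every input point has an
# exact copy in the output, so Fidelity collapses to (numerically) zero.
partial = torch.rand(2048, 3)
pred = torch.rand(14336, 3)
output = torch.cat([partial, pred], dim=0)
print(fidelity(partial, output))  # ~0 up to floating-point error
```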

When calculating MMD, do you use the symmetric CD, and do you select the object from the PCN Cars test set with the minimal CD to the prediction?

Yes. Please refer to https://github.com/yuxumin/PoinTr/blob/master/KITTI_metric.py for the detailed calculation process for these two metrics.
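A rough sketch of the MMD part as described above (the authoritative version is the linked KITTI_metric.py; this sketch assumes a symmetric squared-L2 Chamfer distance and unbatched tensors, and the names are illustrative):

```python
# A rough sketch of MMD (Minimal Matching Distance): the Chamfer distance
# between the completed KITTI car and its closest match in the PCN Cars
# (ShapeNet cars) test set. Not the exact KITTI_metric.py implementation.
import torch

def chamfer_l2(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric squared-L2 Chamfer distance between (N, 3) and (M, 3) clouds."""
    d = torch.cdist(a, b, p=2) ** 2                            # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def mmd(prediction: torch.Tensor, pcn_cars_test) -> torch.Tensor:
    """Match the prediction against every car in the test set and keep the
    minimal CD, i.e. the distance to the most similar ground-truth car."""
    return min(chamfer_l2(prediction, car) for car in pcn_cars_test)
```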

Thank you very much for your reply.
I have resolved it by following your code. This is a great project.

Hi, I'm sorry to bother you again.
We have now trained PoinTr on the PCN dataset from scratch with two RTX 3090s, but the L1 CD is 7.79, worse than your pretrained model. Have you fixed the bug in datasets/PCNDataset.py in your latest project, and what is the difference between PCN and PCNv2?

@yuxumin Hi, bro. I am still waiting for your reply.

I missed this issue; sorry for the late reply.
Yes, in fact, PoinTr can achieve CD 7.26 on PCN after fixing the bug (while we report CD 8.38 in the ICCV paper).

The only difference is the bug in the dataloader that you mentioned.

But after comparing the code line by line, I found only a difference in the name of the function that upsamples the input points. Could you point out specifically where the two code snippets differ with respect to the mentioned bug? Sorry to trouble you.

PCN.pth was trained on the GRNet codebase. The PoinTr codebase was created after the paper was accepted by ICCV, so you cannot find this bug or its modification history here :)

I understand what you mean about the bug now. So the current difference between them is the way the inputs are upsampled. And is 7.26 based on PCNv2 or PCN?

It is based on PCN. PCNv2 is for SnowflakeNet ... I find that SnowflakeNet does not perform well with the default PCN dataset.
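For context, the "upsampling of inputs" discussed above refers to how the dataloader brings the partial cloud up to a fixed point count. A hedged sketch of one common way to do this (illustrative names only, not the actual PoinTr or GRNet dataloader code):

```python
# A hedged sketch of a typical dataloader-side "upsample" of the partial input
# to a fixed point count: subsample when there are too many points and pad by
# randomly repeating points when there are too few.
import numpy as np

def upsample_to_fixed_size(points: np.ndarray, n_points: int = 2048) -> np.ndarray:
    """points: (N, 3) partial cloud; returns an (n_points, 3) cloud."""
    n = points.shape[0]
    if n >= n_points:
        idx = np.random.choice(n, n_points, replace=False)        # random subsample
    else:
        extra = np.random.choice(n, n_points - n, replace=True)   # repeat existing points
        idx = np.concatenate([np.arange(n), extra])
    return points[idx]
```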

Hi, I also trained PoinTr on the PCN dataset, but the L1 CD is still around 7.79 and cannot reach 7.26. Have you solved this issue?