POSTECH-CVLab/point-transformer

Reproducibility

Opened this issue · 7 comments

Hello,

I tried to reproduce the results of Point Transformer for semantic segmentation using your repository, and I could not reach the reported 70 mIoU on S3DIS Area 5. I used the data from PAConv and PointWeb and reached 67.5 mIoU with the best model and 68 mIoU with the last model.
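(For reference, a minimal sketch of how the three numbers discussed in this thread — mIoU, mAcc, and OA — can be computed from per-point predictions and ground-truth labels over the 13 S3DIS classes. This is illustrative only, not this repository's evaluation code.)

```python
import numpy as np

NUM_CLASSES = 13  # S3DIS semantic classes

def scores(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: 1-D integer arrays of per-point class ids in [0, NUM_CLASSES)."""
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(gt * NUM_CLASSES + pred,
                     minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)
    tp = np.diag(cm).astype(np.float64)
    per_class_iou = tp / (cm.sum(0) + cm.sum(1) - tp + 1e-10)
    per_class_acc = tp / (cm.sum(1) + 1e-10)
    miou = per_class_iou.mean()   # mean IoU over classes
    macc = per_class_acc.mean()   # mean class accuracy
    oa = tp.sum() / cm.sum()      # overall (point-wise) accuracy
    return miou, macc, oa
```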

Could you please provide a pretrained model, or at least the evaluation log?

Best regards,
Hani

I achieved 68.5 mIoU with the best model and 66.54 with the last model, training on 4 RTX 8000 GPUs for two days.

I achieved 69.8 for the best mIoU.

Here's mine:

Best IoU: 0.674

What are your config settings? Could you share them? My IoU is only 68.1.

Same problem here; my best mIoU is 67.9. Could you please share your configs?

I achieved 69.8 for the best mIoU.

I trained on 2 RTX 3070 Ti GPUs for two days, changed the max voxel size from 80k to 60k, and reduced the batch size from 16 to 2.
This is my result:

| Model | mAcc  | OA    | mIoU  |
|-------|-------|-------|-------|
| CVLAB | 76.8  | 90.4  | 70.0  |
| Mine  | 75.41 | 88.68 | 67.64 |

I guess there are two reasons for my lower result: first, the batch size, and second, the voxel size. It seems that using the default config settings gives a better result.
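(As a rough aid for comparing runs, a minimal sketch of reading the two settings discussed above from the training YAML. The config path and the key names `voxel_max` and `batch_size` are assumptions, so match them to whatever file this repository actually ships.)

```python
import yaml

CFG_PATH = "config/s3dis/s3dis_pointtransformer_repro.yaml"  # assumed path

with open(CFG_PATH) as f:
    cfg = yaml.safe_load(f)

# The YAML may nest training options under a section such as TRAIN;
# fall back to the top level if it does not.
train = cfg.get("TRAIN", cfg)

# The reported 70.0 mIoU corresponds to the defaults; shrinking these to fit
# smaller GPUs (e.g. 80000 -> 60000 points, batch size 16 -> 2) appears to
# cost roughly 2 mIoU in the runs reported in this thread.
print("voxel_max:", train.get("voxel_max"))
print("batch_size:", train.get("batch_size"))
```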

Why do I end up with a .npy file instead of a point cloud file?
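(If it helps: assuming the saved .npy holds per-point predicted labels, you can combine it with the room's xyz coordinates and write a .ply yourself to get a viewable point cloud. A minimal sketch, with assumed file names and an arbitrary color palette — not this repository's code:)

```python
import numpy as np

xyz = np.load("room_coords.npy")[:, :3]        # (N, 3) point coordinates (assumed file)
labels = np.load("room_pred.npy").astype(int)  # (N,) predicted class id per point (assumed file)

# One RGB color per S3DIS class (13 classes); any fixed palette works.
palette = np.random.RandomState(0).randint(0, 256, size=(13, 3)).astype(np.uint8)
colors = palette[labels]

# Write an ASCII .ply that CloudCompare or MeshLab can open.
with open("room_pred.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(xyz)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
    f.write("end_header\n")
    for (x, y, z), (r, g, b) in zip(xyz, colors):
        f.write(f"{x} {y} {z} {r} {g} {b}\n")
```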