zaiweizhang/H3DNet

Fine-tuning H3DNet on a ScanNetV2 subset (3 classes only)

giangdip2410 opened this issue · 8 comments

Hi Zhang,

I am trying to fine-tune your model on a subset of ScanNet V2 (I picked only the 3 most popular classes). Do you have any suggestions for me? I tried freezing your weights and training only the last layer, but the mAP did not increase.

Thank you very much,

I'm actually not sure that training on 3 classes alone will guarantee a performance boost. Can I get a bit more insight? You said the mAP did not increase when fine-tuning. Is there a performance drop, or does it stay the same?

Hi @zaiweizhang, thank you so much. I found my issue. At first I loaded the weights with strict=False, as the PyTorch authors suggest, but it did not work (mAP decreased sharply). So I loaded the weights layer by layer instead, dropping every layer that did not match your pretrained model. After fine-tuning, mAP (IoU 0.5) increased by 4% compared to your model.
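For reference, a minimal sketch of that layer-by-layer loading in PyTorch; `model` and the checkpoint path are placeholders, not the actual H3DNet training script:

```python
import torch

# Load the pretrained checkpoint (path is a placeholder). Depending on how
# it was saved, the weights may be nested under a key such as
# 'model_state_dict'.
pretrained = torch.load("h3dnet_pretrained.pth", map_location="cpu")
model_state = model.state_dict()

# Keep only tensors whose name AND shape both match; mismatched layers
# (e.g. the 3-class prediction heads) keep their random initialization.
matched = {
    k: v for k, v in pretrained.items()
    if k in model_state and v.shape == model_state[k].shape
}
model_state.update(matched)
model.load_state_dict(model_state)
```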

But when I check with my test data, the model is not stable: for the same point cloud, repeated runs give very different results, some good, some bad. I think it may be because the point cloud is sampled down to 40k points; do you think so? Do you see any way to get stable, good results on test data? If I fix the seed it may be stable, but I am not sure the results will be good.

Thank you very much.

For "check with my test data", you mean your customized data right? Your input sampling strategy may make a difference. You can try one sampling, fix your input and test multi-times on the pretrained model. If the results vary a lot, it means that there are something unstable in the pretrained model. If the results do not vary a lot, it means that your input sampling strategy is unstable.

When I was testing with ScanNet/SUNRGBD, the results varied only a little across runs (around 1%).

@zaiweizhang: you are correct, I tested with custom data, and I understand your point. My original data is a point cloud of 5M points; if I fix the sampling strategy (fix the seed), the result is stable. So I think the instability comes from sampling from the original data: because my data has 5M points, after sampling down to 50k points the model sometimes predicts well and sometimes badly. Do you have any ideas/suggestions for fixing this?
Thank you very much.

What is your sampling method? For point clouds, the common practice is to use furthest point sampling; if you are not using that, I would suggest trying it. In addition, you can try a larger input point cloud size: for 5M points, maybe try 80k or 100k. Hope it helps!
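For illustration, a minimal NumPy version of furthest point sampling (the greedy O(N·K) formulation; production pipelines usually use a CUDA implementation such as the ops shipped with PointNet++-style codebases):

```python
import numpy as np

def furthest_point_sample(points, k):
    """Greedily select k points from an (N, 3) array, each new point
    maximizing its distance to the already-selected set."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    dist = np.full(n, np.inf)  # squared distance to nearest selected point
    selected[0] = 0            # arbitrary (or random) starting index
    for i in range(1, k):
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        selected[i] = int(np.argmax(dist))
    return points[selected]
```

Unlike numpy.random.choice, this keeps the selected points spread evenly over the whole 5M-point scene rather than concentrated wherever the density happens to be highest.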

@zaiweizhang: Thank you very much. Currently I only use numpy.random.choice for point sampling, so maybe I should try another sampling algorithm. I will also try more sampling points, like 80k or 100k, but my concern is that the model was trained on 40k-point clouds. If I test on 100k-point data, must the input be sampled down to 40k points to fit the model?

If you are using numpy.random.choice, I would highly suggest switching to the furthest point sampling method; I am fairly confident the random sampling is causing the issue. PointNet can take variable-sized input, so you do not need to sample down to 40k for it to work.
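As a toy illustration of why the input size is flexible (this is not H3DNet's actual backbone): a PointNet-style encoder applies a shared per-point MLP followed by a global max-pool, so the feature size is independent of the number of points:

```python
import torch
import torch.nn as nn

# 1x1 convolutions act as a shared per-point MLP; the max over the point
# dimension collapses any N points into one fixed-size feature vector.
encoder = nn.Sequential(nn.Conv1d(3, 64, 1), nn.ReLU(), nn.Conv1d(64, 256, 1))

for n in (40_000, 100_000):
    x = torch.randn(1, 3, n)                # (batch, channels, num_points)
    feature = encoder(x).max(dim=2).values  # (1, 256) regardless of n
    print(n, tuple(feature.shape))
```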

@zaiweizhang: thank you very much for your suggestion. I will try the furthest point sampling method. Have a nice weekend.