MehmetAygun/4D-PLS

Question about testing of the pre-trained model

minghanz opened this issue

Hi! May I ask what the expected test performance of the released pretrained model is? I got LSTQ 62.14 on the validation set using the provided model, which is slightly lower than the LSTQ 62.74 reported in the paper. Is 62.14 also what you got from this released model? Is the small gap with the paper's number expected randomness across training runs, or am I misconfiguring the testing? (I also retrained a model from scratch, which again gives a consistent 62.14 LSTQ.)
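(For context, LSTQ is the geometric mean of a semantic classification score and an association score, so a 0.6-point gap reflects small shifts in either term. A minimal sketch of the metric's form, with illustrative inputs rather than the paper's actual component scores:)

    import math

    # LSTQ = sqrt(S_cls * S_assoc): the geometric mean of the semantic
    # classification score and the association score.
    def lstq(s_cls, s_assoc):
        return math.sqrt(s_cls * s_assoc)

    print(lstq(0.65, 0.60))  # illustrative values only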

My testing configuration is:

    config.global_fet = False
    config.validation_size = 200
    config.input_threads = 0
    config.n_frames = 4
    config.n_test_frames = 4  # should not exceed config.n_frames
    if config.n_frames < config.n_test_frames:
        config.n_frames = config.n_test_frames
    config.big_gpu = True
    config.dataset_task = '4d_panoptic'
    config.sampling = 'importance'
    config.decay_sampling = 'None'
    config.stride = 1
    config.first_subsampling_dl = 0.061  # voxel size of the first grid subsampling

None of which I changed from the provided code. In particular, I want to ask about the first_subsampling_dl parameter: is there a reason it is 0.061 here instead of 0.06, as in the training configuration? I am testing on a V100 GPU with 32 GB of memory.
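(For reference, a minimal NumPy sketch of the kind of grid subsampling that first_subsampling_dl controls in KPConv-style pipelines; the repo uses a compiled implementation, so this is only meant to illustrate why the voxel size changes the effective input resolution:)

    import numpy as np

    def grid_subsample(points, dl):
        # Bucket points into voxels of side length dl and keep one
        # centroid per occupied voxel (illustrative only).
        voxel_idx = np.floor(points / dl).astype(np.int64)
        _, inverse = np.unique(voxel_idx, axis=0, return_inverse=True)
        sums = np.zeros((inverse.max() + 1, points.shape[1]))
        counts = np.zeros(inverse.max() + 1)
        np.add.at(sums, inverse, points)
        np.add.at(counts, inverse, 1)
        return sums / counts[:, None]

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 50.0, size=(100_000, 3))  # synthetic cloud
    print(len(grid_subsample(pts, 0.06)), len(grid_subsample(pts, 0.061)))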

Thank you!

Yeah, due to some randomness in the code (like the sampling process), I would expect that the difference is normal.
0.061 is weird; it should be 0.06. I used the default parameters from here most of the time: https://github.com/HuguesTHOMAS/KPConv-PyTorch/blob/master/train_SemanticKitti.py
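If you want to tighten the run-to-run variance when comparing numbers, the usual seeding helps; a sketch, assuming the standard PyTorch/NumPy RNGs (not something the repo does by default, and some CUDA kernels stay nondeterministic regardless):

    import random

    import numpy as np
    import torch

    def seed_everything(seed=0):
        # Pin the common RNG sources so repeated test runs are comparable.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    seed_everything(0)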

Thanks!