What is the point of looping the dataset 30 times during training?
ZheningHuang opened this issue · 1 comment
Hi, Thanks for the amazing work.
I wonder if you could share your thoughts behind looping the dataset 30 times in the training process. Could this cause overfitting issues? (As far as I can see, overfitting does occur during training: the mIoU on the training set reaches 0.95+.)
Thanks,
Zhening
Hi @ZheningHuang,
> I wonder if you could share your thoughts behind looping the dataset 30 times in the training process. Could this cause overfitting issues?
In fact, the looping of the dataset was already present when I received this codebase from the first author of Point Transformer.
In my understanding, the looping prevents the training data loader from being re-initialized too frequently (especially for small datasets such as S3DIS), which ultimately reduces the total training time.
Of course, as you mentioned, looping can occasionally produce redundant mini-batches.
For example, with a batch size of 4, there may be cases where all four examples in a batch are the same.
However, these cases are rare, and they hardly affect the final performance of the model on the validation or test dataset.
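To make the mechanism concrete, here is a minimal sketch (not the actual Point Transformer code; all names are illustrative) of how such looping is typically implemented: the dataset reports a length multiplied by a loop factor, and indices are mapped back onto the real data, so one pass of the data loader covers the dataset many times without re-initialization.

```python
# Illustrative sketch of a "looped" dataset, assuming a PyTorch-style
# map dataset interface (__len__ / __getitem__). Names are hypothetical.
class LoopedDataset:
    def __init__(self, samples, loop=30):
        self.samples = samples  # underlying data (e.g., S3DIS scenes)
        self.loop = loop        # how many times to repeat per "epoch"

    def __len__(self):
        # One "epoch" of the data loader now covers the data `loop` times.
        return len(self.samples) * self.loop

    def __getitem__(self, idx):
        # Map the looped index back onto the real dataset.
        return self.samples[idx % len(self.samples)]

data = ["scene_a", "scene_b", "scene_c"]
looped = LoopedDataset(data, loop=30)
print(len(looped))  # 90
print(looped[5])    # scene_c
```

With random shuffling on top of this, the same sample can land in a batch more than once, which is the redundancy mentioned above, but each sample is still seen equally often over the full epoch.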
> As far as I can see, overfitting does occur during training: the mIoU on the training set reaches 0.95+.
I think the reason for this phenomenon is more likely the small size of the training dataset rather than the looping. To check further, you can try training a model on the ScanNet dataset, which is larger than S3DIS.