About comparison fairness and dataset splitting
Dear Authors,
Thank you for your invaluable contributions to this repository. I am currently exploring the field of time series imputation and have encountered some aspects of the evaluation protocol that I believe could benefit from further discussion.
- Dataset Splitting: Splitting the dataset chronologically is well-suited to time series forecasting, where it prevents data leakage. However, for imputation tasks, where the goal is to address missingness in the available data, such splitting may not be necessary. Since the primary concern in imputation is dealing with inherently missing data, a non-chronological split might be more appropriate: it reflects real-world scenarios where all available data is subject to imputation, not just the most recent portion.
- Evaluation Comparisons: The evaluation process raises some questions about fairness and consistency across methods. Consider, for instance, comparing a Transformer with a mean imputer. While the Transformer is assessed on the test data, how the mean imputer should be evaluated remains unclear. Should the mean imputer also have access to the observed values in the test set, given that those values would be available at model-serving time as well? There are two options (a minimal sketch contrasting them follows this list):
  - Fitting the mean imputer only on the train/validation sets seems unfair, since the observed values in the test set would also be available to it; using them causes no leakage, and the neural models already exploit them as conditioning input at inference time.
  - Fitting the mean imputer exclusively on the test set does not leverage the potentially informative train/validation sets, which seems equally unfair.
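To make the two options concrete, here is a minimal numpy sketch on synthetic data (the array shapes, missing rate, and variable names are all hypothetical, and this is not the PyPOTS API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: rows are samples, columns are features; NaN marks missingness.
train = rng.normal(size=(800, 5))                 # train/validation data, fully observed
test = rng.normal(loc=0.5, size=(200, 5))         # test data with a distribution shift
test[rng.random(test.shape) < 0.2] = np.nan       # simulated missingness in the test set

# Option 1: fit the statistics on the train/validation data only.
means_train_only = np.nanmean(train, axis=0)

# Option 2: also use the observed test entries, which would be
# visible at serving time anyway.
means_with_test = np.nanmean(np.concatenate([train, test], axis=0), axis=0)

# Impute the missing test entries under each option.
imputed_1 = np.where(np.isnan(test), means_train_only, test)
imputed_2 = np.where(np.isnan(test), means_with_test, test)
```

Under a distribution shift between the train and test periods, the two options yield noticeably different imputed values, which is exactly the fairness concern above.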
In view of these points, I suggest the following:
- For generalized imputation methods like those in HyperImpute, should we treat only the held-out missing values in the test set as unavailable, and regard the rest of the data as usable (including the observed values in the test set as well as the train and validation data)?
- Could we use a non-chronological train-val-test split (sketched below), given that in practical applications the emphasis is on imputing the entire dataset rather than only the most recent months? More importantly, in real missing-value imputation the ground truth of the missing entries is unavailable, so evaluation typically relies on masking observed values (kindly see the protocol of HyperImpute for reference).
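For reference, a minimal sketch of the two splitting schemes under discussion, using hypothetical array shapes (plain numpy, not a PyPOTS utility):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
data = rng.normal(size=(n_samples, 24, 5))  # hypothetical (samples, steps, features)
cut = int(n_samples * 0.8)

# Chronological split: the test set is the most recent 20% of samples.
train_chrono, test_chrono = data[:cut], data[cut:]

# Non-chronological split: shuffle first, so every time period contributes
# to both sets -- closer to "impute the whole dataset" than to forecasting.
perm = rng.permutation(n_samples)
train_rand, test_rand = data[perm[:cut]], data[perm[cut:]]
```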
I look forward to your insights and any suggestions you might have on aligning the evaluation framework with real-world imputation tasks.
Best regards,
Hao
Hi there 👋,
Thank you so much for your attention to PyPOTS! You can follow me on GitHub to receive the latest news about PyPOTS. If you find our research helpful to your work, please star⭐️ this repository. Your star is your recognition, which helps more people notice PyPOTS and grows the PyPOTS community. It matters and is definitely a kind of contribution to the community.
I have received your message and will respond ASAP. Thank you for your patience! 😃
Best,
Wenjie
Hi @HowardZJU, thanks for raising the discussion here.
- Splitting datasets chronologically is necessary for any time-series modeling task in which the model will be applied to future data. Splitting is not only a way to avoid data leakage but also a way to prevent overfitting (the two can amount to the same thing in some cases). In the time-series imputation field there are two scenarios, i.e. in-sample and out-of-sample imputation. For the in-sample case, as you've said, there is no need to split the data chronologically. But for the out-of-sample case it is vital, because we need to ensure the generalization ability of the trained models. We take the latter setting in our experiments, not only because imputation generalization matters for deep-learning algorithms, but also because out-of-sample imputation is what is commonly needed in real-world applications;
- The naive imputation methods in PyPOTS, e.g. the mean imputer, currently have no training stage. Hence, if you'd like them to calculate their empirical values over both the training set and the test set, you can simply merge the two sets before feeding them to the imputers (see the sketch below);
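A minimal numpy sketch of that merging step, with hypothetical shapes and synthetic data (not the actual PyPOTS imputer interface):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical POTS arrays: (samples, steps, features), NaN marks missing values.
train_set = rng.normal(size=(600, 24, 3))
test_set = rng.normal(size=(150, 24, 3))
test_set[rng.random(test_set.shape) < 0.1] = np.nan

# Merge the sets so the empirical means are computed over every observed value.
merged = np.concatenate([train_set, test_set], axis=0)
feature_means = np.nanmean(merged, axis=(0, 1))   # one mean per feature

# Fill the missing entries of the test set with those feature-wise means.
imputed_test = np.where(np.isnan(test_set), feature_means, test_set)
```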
You ask good questions and provide helpful insights here. We appreciate that. If you also work with POTS (partially-observed time series) data, we sincerely invite you to join our community and build PyPOTS better together ;-)