New question regarding the multistep prediction strategy
rodgdutra opened this issue · 2 comments
I was reading and debugging the multi-step implementation to understand it better, and I've come across an interesting thing: it seems like the features and labels in training and evaluation are the same. Is this behavior correct? I thought that in a multi-step prediction problem the input features are delayed relative to the target labels, so that we have a window of the data's past behavior and aim to predict its future behavior.
One can observe this in the lines:
transformer-time-series-prediction/transformer-multistep.py
Lines 91 to 92 in 570d39b
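For context, here is a minimal sketch of the kind of windowing that can make features and labels look identical: input and label cover the *same* time span, but the input has its last `output_window` values zero-masked. The function name `create_inout_sequences` and the variable `output_window` follow the repo's naming; the body below is an illustrative assumption, not a copy of the source.

```python
import torch

input_window = 10    # total window length fed to the model
output_window = 3    # number of future steps to predict

def create_inout_sequences(series: torch.Tensor):
    """Build (input, label) pairs over the SAME time span: the input has
    its last `output_window` values zero-masked, while the label keeps
    the true values. This is why features and labels look identical at
    first glance."""
    pairs = []
    for i in range(len(series) - input_window):
        window = series[i : i + input_window]
        seq = torch.cat([window[:-output_window],
                         torch.zeros(output_window)])  # mask the "future"
        label = window.clone()                          # full true window
        pairs.append((seq, label))
    return pairs

series = torch.arange(20, dtype=torch.float32)
pairs = create_inout_sequences(series)
seq, label = pairs[0]
# seq:   [0, 1, ..., 6, 0, 0, 0]  (future steps masked with zeros)
# label: [0, 1, ..., 9]           (true values for the whole window)
```

So the label is not delayed relative to the input; instead, the "future" part of the input is hidden from the model.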
In this case, it is correct. Depending on the training mode, the loss is calculated either over the output (prediction window) only or over the whole sequence:
transformer-time-series-prediction/transformer-multistep.py
Lines 177 to 180 in 570d39b
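The two training modes can be sketched as follows. The flag name `calculate_loss_over_all_values` is taken from the discussion around this repo, but the snippet itself is an assumption for illustration, not the source code.

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
output_window = 3

# Assumed flag name: toggles which positions contribute to the loss.
calculate_loss_over_all_values = False

output = torch.randn(10, 1)   # model prediction for the whole window
targets = torch.randn(10, 1)  # true values for the same window

if calculate_loss_over_all_values:
    # Penalize reconstruction of the input positions as well,
    # so the model must also reproduce the observed history.
    loss = criterion(output, targets)
else:
    # Only the last `output_window` steps (the actual forecast) count.
    loss = criterion(output[-output_window:], targets[-output_window:])
```

With the flag off, gradients come only from the forecast horizon; with it on, the model is additionally trained to reproduce the input part of the window.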
You can use this line to change the behaviour:
Depending on the task, it is sometimes beneficial to train the model to also predict the correct values for the input positions. You can also read a bit about this flag in this issue: #6
Feel free to reopen this issue if you have any questions.