KimMeen/Time-LLM

Question about "dec_out"

Closed this issue · 3 comments

dec_out = self.output_projection(dec_out[:, :, :, -self.patch_nums:])
dec_out = dec_out.permute(0, 2, 1).contiguous()

Why does the slice keep only `-self.patch_nums:` along the last dimension of `dec_out`?
Thanks for your attention!

Thank you for your interest in our work. Please refer to issue #59.
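For intuition, the explanation in the linked issue is that the LLM's output sequence contains the prompt-prefix positions followed by the patch positions, so only the trailing `patch_nums` positions carry patch information. A minimal sketch of the slicing (all shapes here are hypothetical, and NumPy stands in for the PyTorch tensors used in the repository; the indexing semantics are identical):

```python
import numpy as np

# Hypothetical shapes, for illustration only.
batch, n_vars, d_ff = 2, 7, 32
prompt_len, patch_nums = 10, 8

# After the frozen LLM, the last axis holds the prompt-prefix positions
# followed by the patch positions.
dec_out = np.random.randn(batch, n_vars, d_ff, prompt_len + patch_nums)

# Slicing with `-patch_nums:` drops the prompt prefix and keeps one
# embedding per input patch, which is what the projection head consumes.
patch_part = dec_out[:, :, :, -patch_nums:]
print(patch_part.shape)  # (2, 7, 32, 8)
```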

Thank you for your reply!
My background is mainly in model robustness, so I may not be very familiar with time series forecasting.

  1. Under the `--features M` setting, why does the dataset index over this length when reading data? That is, one index may select a HUFL window to predict HUFL, while another selects an OT window to predict OT:
    def __len__(self):
        return (len(self.data_x) - self.seq_len - self.pred_len + 1) * self.enc_in
  2. In addition, I think that in M mode, for ETTh1, both the input and the output should be 7-dimensional features, while in MS mode the input should be 7-dimensional and the output should be OT only (a single dimension). Is this understanding correct?


Your understanding is correct. However, to achieve efficient model transferability, Time-LLM adopts a channel-independence strategy, treating each channel of a multivariate time series as an independent univariate series. If you are interested in multivariate time series modeling, you can check out my work "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting".
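The channel-independent indexing above can be sketched as follows. This is a toy stand-in, not the repository's exact code: the function names and the data are illustrative, and NumPy is used in place of the actual PyTorch `Dataset`. The key idea is that a single flat index encodes both a channel and a window start, which is why `__len__` multiplies by `enc_in`:

```python
import numpy as np

# Toy data: 100 time steps, 7 channels (e.g. the ETTh1 variables).
data = np.random.randn(100, 7)
seq_len, pred_len = 24, 12
tot_len = len(data) - seq_len - pred_len + 1  # windows per channel

def length():
    # Each of the 7 channels contributes `tot_len` univariate samples,
    # mirroring __len__ returning (len - seq_len - pred_len + 1) * enc_in.
    return tot_len * data.shape[1]

def get_item(index):
    # One flat index picks both a channel and a window start, so a HUFL
    # window predicts HUFL and an OT window predicts OT, independently.
    feat_id = index // tot_len
    s_begin = index % tot_len
    seq_x = data[s_begin:s_begin + seq_len, feat_id:feat_id + 1]
    seq_y = data[s_begin + seq_len:s_begin + seq_len + pred_len,
                 feat_id:feat_id + 1]
    return seq_x, seq_y

x, y = get_item(tot_len)  # first window of the second channel
print(length(), x.shape, y.shape)  # 455 (24, 1) (12, 1)
```

Each sample is therefore a univariate (length, 1) window, regardless of how many channels the raw dataset has; the multivariate structure is recovered only by iterating over channels.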