tidyverts/fasster

Forecast with shifted data but fixed parameters


Hello,

I am trying to benchmark fasster against a few other custom seq-2-seq models. The alternative models are computationally expensive, so I'm using an 80-20 split to train and evaluate them. This means that for the final 20% of the data I generate forecasts over a 6 month horizon, shift the input window one step to the right, and repeat without updating the parameters. To benchmark against fasster, I currently have something like the following (assuming .init in stretch_tsibble() specifies the training set).

library(fpp3)
library(fasster)

# Cumulative windows: one forecast origin per .id
beer <- aus_production %>%
  select(Beer) %>%
  stretch_tsibble(.init = 12, .step = 1)
beer

# A separate model is estimated for every window (.id)
fc <- beer %>%
  model(ETS(Beer),
        fasster(Beer ~ 1)) %>%
  forecast(h = "1 year") %>%
  group_by(.id, .model) %>%
  mutate(h = row_number()) %>%   # forecast horizon within each window
  ungroup()

fc %>%
  accuracy(aus_production, by = c("h", ".model"))

This code estimates a separate fasster model for each .id, and that refitting across windows makes the seq-2-seq and fasster models difficult to compare. Furthermore, the new_data argument in forecast() only seems useful for supplying exogenous regressors, not for extending the autoregressive/moving-average history. Is there a recommended way to forecast from a model with fixed parameters, so that the validation approaches stay consistent? If not, are there any plans to fully decouple training and forecasting?
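
For what it's worth, here is a minimal sketch of the behaviour I'm after, using fable's refit() with reestimate = FALSE, which ETS supports; I'm assuming fasster may not have an equivalent refit() method yet, and the training cut-off ("2004 Q4") is arbitrary.

library(fpp3)

# Estimate parameters once, on the training window only
beer_full <- aus_production %>% select(Beer)
train <- beer_full %>% filter_index(~ "2004 Q4")   # arbitrary training cut-off

fit <- train %>%
  model(ets = ETS(Beer))

# Apply the same fitted parameters to the series extended by one quarter,
# then forecast from the new origin; reestimate = FALSE keeps parameters fixed
fit %>%
  refit(beer_full %>% filter_index(~ "2005 Q1"), reestimate = FALSE) %>%
  forecast(h = "1 year")

Looping this refit + forecast step over each origin in the test period and combining the resulting fables would reproduce the stretch_tsibble() evaluation above, but with parameters fixed at the training estimates.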