Reconciliation w/ Fugue backend
webert6 opened this issue · 3 comments
After building forecasts at scale using the Fugue backend with Spark, we also need to reconcile at scale. Is there anything in the works to enable this?
Quick example:
It would be great if the aggregate function were compatible with a PySpark dataframe, with the ability to create the same three objects (Y_df, S_df, tags).
From there, we could pass Y_df (a Spark dataframe) to forecast,
then pass the forecasts, summing matrix, and tags to the reconciliation method.
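For context, here is a minimal sketch of the three objects a Spark-compatible aggregate would need to produce, hand-built with pandas/numpy for a toy two-level hierarchy (this mimics the shape of HierarchicalForecast's output, but the values and labels here are illustrative, not the library's code):

```python
import numpy as np
import pandas as pd

# Toy hierarchy: Total -> {A, B}, three time steps per bottom series.
bottom = pd.DataFrame({
    "unique_id": ["A"] * 3 + ["B"] * 3,
    "ds": list(range(3)) * 2,
    "y": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# Y_df: bottom series plus their aggregates, in long format.
total = bottom.groupby("ds", as_index=False)["y"].sum()
total["unique_id"] = "Total"
Y_df = pd.concat([total[["unique_id", "ds", "y"]], bottom], ignore_index=True)

# S_df: summing matrix mapping the bottom series to every series.
S_df = pd.DataFrame(
    np.array([[1.0, 1.0],   # Total = A + B
              [1.0, 0.0],   # A
              [0.0, 1.0]]), # B
    index=["Total", "A", "B"], columns=["A", "B"],
)

# tags: each hierarchy level mapped to the series it contains.
tags = {"Total": np.array(["Total"]), "Total/Leaf": np.array(["A", "B"])}
```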
Hey @webert6,
Thanks for using HierarchicalForecast, and thanks for the suggestion.
For the moment we don't support Spark distributed datasets; such a feature would require substantial work in the library.
Ideas that need to be considered for large data reconciliation:
- The aggregation constraints matrix contained in S_df has a shape that scales quadratically: (n_aggregate + n_bottom) x (n_bottom). The first step would be to make the associated S matrix sparse.
- Some optimal reconciliation strategies scale cubically due to matrix inversions. We would need to apply dimensionality reduction and/or conjugate-gradient inversion.
- In some cases, BottomUp reconciliation could be applied in a completely parallelized form using map-reduce; this idea would require additional development and a forecast distribution representation compatible with such an approach.
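To illustrate the first point: for a tree hierarchy, S has (n_aggregate + n_bottom) x n_bottom entries but only about n_bottom x depth nonzeros, so a sparse representation avoids the quadratic memory cost, and BottomUp reconciliation reduces to a sparse matrix-vector product. A sketch with scipy.sparse (the hierarchy and values are made up for illustration, not taken from the library):

```python
import numpy as np
from scipy import sparse

# Summing matrix for a 7-series hierarchy with 4 bottom series:
# Total -> {X, Y}, X -> {x1, x2}, Y -> {y1, y2}.
S_dense = np.array([
    [1, 1, 1, 1],   # Total
    [1, 1, 0, 0],   # X
    [0, 0, 1, 1],   # Y
    [1, 0, 0, 0],   # x1
    [0, 1, 0, 0],   # x2
    [0, 0, 1, 0],   # y1
    [0, 0, 0, 1],   # y2
], dtype=float)
S = sparse.csr_matrix(S_dense)  # only 12 nonzeros stored, not 28 entries

# BottomUp reconciliation: propagate bottom forecasts up the tree
# with a single sparse mat-vec.
y_hat_bottom = np.array([1.0, 2.0, 3.0, 4.0])
y_reconciled = S @ y_hat_bottom  # coherent forecasts for all 7 series
```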
On another note, the creation of S_df and the hierarchical aggregation already use numpy vectorized functions.
As long as the dataset fits in RAM, it should be a fairly efficient method.
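The vectorized idea can be sketched as follows: the rows of S for one hierarchy level fall out of a single broadcast comparison between each bottom series' label at that level and the level's unique values, with no Python loops (labels here are hypothetical, not the library's internals):

```python
import numpy as np

# Label of each bottom series at one hierarchy level.
bottom_labels = np.array(["X", "X", "Y", "Y"])
level_values = np.unique(bottom_labels)  # sorted unique labels: ["X", "Y"]

# Broadcasting a (n_level, 1) against a (1, n_bottom) array yields the
# whole (n_level, n_bottom) 0/1 block of S for this level at once.
S_level = (level_values[:, None] == bottom_labels[None, :]).astype(float)
# S_level == [[1, 1, 0, 0],
#             [0, 0, 1, 1]]
```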