[Q] Allow error metrics to compare models without anomaly score?
breznak commented
- on a "strategic" level:
  - can we include metrics to compare models by something other than the anomaly score, i.e. (additionally) compare them by an error metric?
- implementation:
  - I'd like to add MSE and R2 metrics
  - (opt) extend the API to require algorithms to also emit a prediction (apart from `anomalyScore`), so we can compute these metrics for all algorithms?
  - metric signature: `E(current, predicted) -> Real` (see the sketch below)
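
A minimal sketch of what these two metrics could look like, matching the `E(current, predicted) -> Real` signature above (numpy-based; the names and structure are my assumption, not existing repo API):

```python
import numpy as np

def mse(current: np.ndarray, predicted: np.ndarray) -> float:
    """Mean squared error between observed and predicted values."""
    return float(np.mean((current - predicted) ** 2))

def r2(current: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((current - predicted) ** 2)
    ss_tot = np.sum((current - np.mean(current)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```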
Justification:
- some (most?) time-series algorithms provide a prediction for `T+1`, but computing an anomaly score is sometimes impossible or difficult, see #347. Papers more typically report these error scores for time series than an anomaly metric, so this repo could easily serve as a benchmark for experimenting with different algorithms & datasets.
- interesting hypothesis: is the anomaly score really the ideal metric for time series?
- a relatively simple change would enable new comparisons & functionality (see the usage sketch below)
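
For illustration, a hypothetical end-to-end comparison reusing the `mse`/`r2` helpers above; the two baseline models are toy stand-ins for real detectors, not part of the repo:

```python
import numpy as np

class LastValueModel:
    """Trivial baseline: predicts T+1 as the value observed at T."""
    def predict(self, value: float) -> float:
        return value

class MeanModel:
    """Trivial baseline: predicts T+1 as the running mean of values seen so far."""
    def __init__(self) -> None:
        self.n, self.total = 0, 0.0
    def predict(self, value: float) -> float:
        self.n += 1
        self.total += value
        return self.total / self.n

# Toy time series; each model predicts series[t+1] from series[t].
series = np.sin(np.linspace(0, 10, 200))
for name, model in {"last-value": LastValueModel(), "running-mean": MeanModel()}.items():
    predicted = np.array([model.predict(x) for x in series[:-1]])
    print(f"{name}: MSE={mse(series[1:], predicted):.4f}, R2={r2(series[1:], predicted):.4f}")
```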