SoftED: Metrics for Soft Evaluation of Time Series Event Detection

SoftED metrics are a new set of metrics designed for the soft evaluation of event detection methods. They assess both detection accuracy and the degree to which detections represent events. Compared to the usual classification metrics, SoftED improves event detection evaluation by associating events with their representative detections and incorporating temporal tolerance into event detection. SoftED metrics contribute to detection evaluation and method selection.

This repository presents the implementation of SoftED metrics, as well as the datasets and experimental evaluation codes and results.

Folders are organized as follows:

  • softed_metrics: implementation of SoftED metrics in R
  • detection_data: Datasets adopted in experimental evaluation
  • experiment_code: Experimental evaluation codes
  • metrics_results: Results of detection evaluations based on the adopted metrics
  • presentations: Presentations regarding SoftED metrics
  • quali_survey: Qualitative analysis and survey results

This README gives a brief overview of the SoftED metrics and contains usage examples for evaluating a particular time series event detection method. Please refer to the following for more details about SoftED formalization and experimental evaluation:

  • Published paper: coming soon
  • Experimental evaluation: Wiki page

SoftED R implementation

The implementation of SoftED metrics is available in R and can be downloaded directly from softed_metrics.r.

The use of the metrics is independent of the adopted detection method. Given the detection results of a time series event detection method, a simple usage example is:

soft_evaluate(events, reference, k=15)

Input:

  • events: a data.frame with at least one variable: time (event times/indexes)
  • reference: a data.frame of the same length as the time series, with two variables: time and event (boolean indicating true events)

Output:

  • The calculated metric values.
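As a minimal, self-contained sketch of this interface (synthetic data, with indices used as time; the event times 20, 50, 80 and the detections are made up for illustration, and the implementation is loaded from the repository URL given below):

```r
# Load the SoftED implementation from the repository
source("https://raw.githubusercontent.com/cefet-rj-dal/softed/main/softed_metrics/softed_metrics.r")

# Synthetic reference: one observation per index, true events at 20, 50, 80
reference <- data.frame(time  = 1:100,
                        event = 1:100 %in% c(20, 50, 80))

# Hypothetical detections: two close to true events, one farther away
events <- data.frame(time = c(22, 47, 96))

# Soft evaluation with a temporal tolerance of k = 15 observations
soft_evaluate(events, reference, k = 15)
```

The detections at times 22 and 47 fall within the k = 15 tolerance window of the true events at 20 and 50, so they are credited according to their temporal distance, while the detection at 96 lies outside the window of the event at 80.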

Example: soft evaluation of a detection method

library(EventDetectR)

# === Preparing the data - GECCO dataset ===
train <- geccoIC2018Train[16500:18000,]
serie <- subset(train, select=c(Time, Trueb))
reference <- subset(train, select=c(Time, EVENT))
names(reference) <- c("time","event")
# Loading the detection method implementation
source("https://raw.githubusercontent.com/cefet-rj-dal/softed/main/experiment_code/harbinger.r")

# Loading the SoftED implementation
source("https://raw.githubusercontent.com/cefet-rj-dal/softed/main/softed_metrics/softed_metrics.r")
# Detecting events
events <- evtdet.seminalChangePoint(serie, w=50, na.action=na.omit) # SCP
# Plotting detected events
print(evtplot(serie, events, reference))
# Hard evaluation (traditional classification metrics)
evaluate(events, reference)

# Soft evaluation
soft_evaluate(events, reference, k=15)

SoftED in the Harbinger package

The Harbinger R package implements the Harbinger framework, designed for the integration and analysis of event detection methods. It has recently been extended with the SoftED metrics, allowing soft evaluation of the detection performance of time series event detection methods.

Jupyter Notebook with an example of usage: https://nbviewer.org/github/cefet-rj-dal/harbinger/blob/master/examples/har_softed.ipynb
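As a sketch of what that integration can look like, assuming the Harbinger package's fit/detect/evaluate interface and its `har_eval_soft` evaluator (the function and argument names are assumptions here and may differ between package versions; the linked notebook is the authoritative reference):

```r
library(harbinger)

# Toy series: a level shift at index 51
serie <- c(rnorm(50, mean = 0), rnorm(50, mean = 5))
event <- seq_along(serie) == 51  # ground-truth events

# Fit an ARIMA-based detector and run detection (hanr_arima assumed)
model <- fit(hanr_arima(), serie)
detection <- detect(model, serie)

# Soft evaluation with a temporal tolerance window (sw_size assumed)
evaluate(har_eval_soft(sw_size = 15), detection$event, event)
```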

SoftED experimental evaluation

SoftED metrics were submitted to an experimental evaluation analyzing their contribution against the traditional classification metrics (hard metrics) and the NAB score (GitHub), both representing the current state of the art in detection scoring.

The experimental settings, datasets and codes of the experimental evaluation of SoftED metrics are described in detail on the wiki page:

SoftED metrics: Experimental evaluation