A scikit-learn compatible Python toolbox for learning with time series and panel data. Eventually, we would like to support:
- Time series classification and regression,
- Classical forecasting,
- Supervised/panel forecasting,
- Time series segmentation,
- Time-to-event and event risk modelling,
- Unsupervised tasks such as motif discovery, anomaly detection and diagnostic visualization,
- On-line and streaming variants of the above.
For deep learning methods, we have a separate extension package: sktime-dl.
The package is under active development. Development takes place in the sktime repository on Github.
Currently, modular modelling workflows for forecasting and supervised learning with time series have been implemented. As next steps, we will move on to supervised forecasting, integrating a modified pysf interface and extending the existing frameworks.
The package is available via PyPI using:
pip install sktime
Note that the package is under active development and not yet feature-stable.
To install the development version, follow these steps:
- Download the repository:
git clone https://github.com/alan-turing-institute/sktime.git
- Move into the root directory of the repository:
cd sktime
- Switch to development branch:
git checkout dev
- Make sure your local version is up-to-date:
git pull
- Install package:
pip install .
You may currently have to install numpy and Cython first, using pip install numpy and pip install Cython.
The low-level interface extends the standard scikit-learn API to handle time series and panel data. Currently, the package implements:
- Various state-of-the-art approaches to supervised learning with time series features,
- Transformation of time series, including series-to-series transforms (e.g. Fourier transform) and series-to-primitives transforms, aka feature extractors (e.g. mean, variance), sub-divided into fittables (fitted on the whole table) and row-wise applicates,
- Pipelining, allowing multiple transformers to be chained with a final estimator,
- Meta-learning strategies including tuning and ensembling, accepting pipelines as the base estimator,
- Off-the-shelf composite strategies, such as a fully customisable random forest for time series classification with interval segmentation and feature extraction,
- Classical forecasting algorithms and reduction strategies to solve forecasting tasks with time series regression algorithms.
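The transformer-plus-pipeline pattern above can be sketched in plain scikit-learn. The class and feature choices below are illustrative, not sktime's actual API: a minimal series-to-primitives transform (a row-wise applicate extracting mean and variance) chained with a final estimator.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

class MeanVarianceExtractor(BaseEstimator, TransformerMixin):
    """Hypothetical series-to-primitives transform: maps each time series
    (one per row) to summary features (mean, variance)."""

    def fit(self, X, y=None):
        # a row-wise applicate learns nothing from the whole table
        return self

    def transform(self, X):
        # X: 2D array, one time series per row -> one feature row per series
        return np.column_stack([X.mean(axis=1), X.var(axis=1)])

# chain the feature extractor with a final estimator, scikit-learn style
pipe = Pipeline([
    ("features", MeanVarianceExtractor()),
    ("clf", DecisionTreeClassifier(random_state=0)),
])

# toy panel: 20 series of length 50 from two classes with different means
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 50)), rng.normal(3, 1, (10, 50))])
y = np.array([0] * 10 + [1] * 10)
pipe.fit(X, y)
print(pipe.predict(X[:2]))
```

Because the transformer follows the scikit-learn API, the same pipeline can be passed to tuning and ensembling meta-estimators as a base estimator.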
There are numerous learning tasks related to time series data, for example:
- Time series classification and regression,
- Classical forecasting,
- Supervised/panel forecasting,
- Time series segmentation.
The sktime high-level interface aims to create a unified interface for these different learning tasks (partially inspired by the APIs of mlr and openML) through the following two objects:
- Task objects that encapsulate meta-data from a dataset and the necessary information about the particular learning task, e.g. the instructions on how to derive the target/labels for classification from the data,
- Strategy objects that wrap low-level estimators and allow fit and predict to be called with data and a task object.
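The Task/Strategy split can be sketched as follows. All names here are illustrative assumptions, not sktime's actual classes: the task records how to derive the target from the data, and the strategy wraps a low-level estimator behind fit/predict.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

class TSCTask:
    """Hypothetical Task object: holds the instructions for a supervised
    learning task, here simply which column is the classification target."""

    def __init__(self, target):
        self.target = target

class Strategy:
    """Hypothetical Strategy object: wraps a low-level estimator and
    exposes fit/predict in terms of data plus a task object."""

    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, task, data):
        # derive features and target from the data as the task instructs
        y = data[task.target]
        X = data.drop(columns=[task.target])
        self._columns = list(X.columns)
        self.estimator.fit(X.values, y.values)
        return self

    def predict(self, data):
        return self.estimator.predict(data[self._columns].values)

# usage: the strategy needs no task-specific wiring at call time
df = pd.DataFrame({"f1": [0, 1, 0, 1], "f2": [1, 1, 0, 0],
                   "label": [0, 1, 0, 1]})
task = TSCTask(target="label")
strategy = Strategy(DecisionTreeClassifier(random_state=0))
strategy.fit(task, df)
print(strategy.predict(df.drop(columns=["label"])))
```

The benefit of this design is that the same strategy object can be re-fitted to different tasks and datasets, which is what makes unified benchmarking across tasks possible.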
The full API documentation and an introduction can be found here. Tutorial notebooks for currently stable functionality are in the examples folder.
- Functionality for the advanced time series tasks above. For (supervised) forecasting, integration of a modified pysf interface; for time-to-event and event risk modelling, integration of an adapted pysf interface,
- Extension of the high-level interface to classical and supervised/panel forecasting, including reduction strategies in which forecasting or supervised forecasting tasks are reduced to tasks that can be solved with classical supervised learning or time series classification/regression algorithms,
- Integration of algorithms for classical forecasting (e.g. ARIMA), deep learning strategies, and third-party feature extraction tools,
- Design and implementation of a specialised data container for efficient handling of time series/panel data in supervised learning workflows and separation of time series meta-data, re-using existing data containers whenever possible,
- Automated benchmarking functionality including orchestration of experiments and post-hoc evaluation methods, based on the mlaut design.
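The reduction idea on the roadmap can be illustrated with a minimal sketch: a univariate forecasting task is turned into a tabular regression task via a sliding window of lagged values. The function name and window length are illustrative, not sktime's API.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_reduction_data(y, window=3):
    """Tabularise a univariate series: each row holds `window` lagged
    values as features, and the next value is the regression target."""
    X = np.column_stack([y[i : len(y) - window + i] for i in range(window)])
    target = y[window:]
    return X, target

# toy example: a linear trend, which a linear regressor recovers
y = np.arange(20, dtype=float)
X, target = make_reduction_data(y, window=3)
reg = LinearRegression().fit(X, target)

# one-step-ahead forecast from the last observed window
forecast = reg.predict(y[-3:].reshape(1, -1))
print(forecast)  # close to 20.0
```

Any scikit-learn regressor (or a time series regressor) can be plugged in, which is what makes reduction a generic strategy rather than a single algorithm.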
Former and current active contributors are as follows.
Project management: Jason Lines (@jasonlines), Franz Király (@fkiraly)
Design: Anthony Bagnall (@TonyBagnall), Sajaysurya Ganesh (@sajaysurya), Jason Lines (@jasonlines), Viktor Kazakov (@viktorkaz), Franz Király (@fkiraly), Markus Löning (@mloning)
Coding: Sajaysurya Ganesh (@sajaysurya), Anthony Bagnall (@TonyBagnall), Jason Lines (@jasonlines), George Oastler (@goastler), Viktor Kazakov (@viktorkaz), Markus Löning (@mloning)
We are actively looking for contributors. Please contact @fkiraly or @jasonlines for volunteering or information on paid opportunities, or simply raise an issue in the tracker.