darts is a Python library for easy manipulation and forecasting of time series.
It contains a variety of models, from classics such as ARIMA to deep neural networks.
The models can all be used in the same way, using `fit()` and `predict()` functions, similar to scikit-learn. The library also makes it easy to backtest models,
combine the predictions of several models, and take external data into account.
Darts supports both univariate and multivariate time series and models.
The ML-based models can be trained on potentially large datasets containing multiple time
series, and some of the models offer rich support for probabilistic forecasting.
- Training Models on Multiple Time Series
- Using Past and Future Covariates
- Temporal Convolutional Networks and Forecasting
- Probabilistic Forecasting
We recommend first setting up a clean Python environment for your project, with at least Python 3.7, using your favorite tool (conda, venv, or virtualenv, with or without virtualenvwrapper).
Once your environment is set up you can install darts using pip:
```
pip install darts
```
For more details you can refer to our installation instructions.
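As an optional sanity check, you can confirm that the package is importable and see which version was installed:

```python
import darts

# Prints the installed darts version
print(darts.__version__)
```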
Create a `TimeSeries` object from a Pandas DataFrame, and split it into train and validation series:
```python
import pandas as pd
from darts import TimeSeries

# Read a pandas DataFrame
df = pd.read_csv('AirPassengers.csv', delimiter=",")

# Create a TimeSeries, specifying the time and value columns
series = TimeSeries.from_dataframe(df, 'Month', '#Passengers')

# Set aside the last 36 months as a validation series
train, val = series[:-36], series[-36:]
```
Fit an exponential smoothing model, and make a (probabilistic) prediction over the validation series' duration:
```python
from darts.models import ExponentialSmoothing

model = ExponentialSmoothing()
model.fit(train)
prediction = model.predict(len(val), num_samples=1000)
```
Plot the median, 5th and 95th percentiles:
```python
import matplotlib.pyplot as plt

series.plot()
prediction.plot(label='forecast', low_quantile=0.05, high_quantile=0.95)
plt.legend()
```
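To also put a number on the forecast quality, you could score the median of the probabilistic forecast with one of the metrics in `darts.metrics`, for instance MAPE. A quick sketch (the exact value will vary from run to run):

```python
from darts.metrics import mape

# MAPE between the validation series and the median of the probabilistic forecast
print("MAPE = {:.2f}%".format(mape(val, prediction.quantile_timeseries(0.5))))
```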
- Forecasting Models: A large collection of forecasting models; from statistical models (such as ARIMA) to deep learning models (such as N-BEATS). See table of models below.
- Data processing: Tools to easily apply (and revert) common transformations on time series data (scaling, Box-Cox, ...)
- Metrics: A variety of metrics for evaluating time series' goodness of fit; from R2-scores to Mean Absolute Scaled Error.
- Backtesting: Utilities for simulating historical forecasts, using moving time windows (see the short sketch after this list).
- Regression Models: Possibility to predict a time series from lagged versions of itself and of some external covariate series, using arbitrary regression models (e.g. scikit-learn models).
- Multiple series training: All machine learning based models (incl. all neural networks) support being trained on multiple series.
- Past and Future Covariates support: Some models support past-observed and/or future-known covariate time series as inputs for producing forecasts.
- Multivariate Support: Tools to create, manipulate and forecast multivariate time series.
- Probabilistic Support: `TimeSeries` objects can (optionally) represent stochastic time series; this can for instance be used to get confidence intervals, and several models support different flavours of probabilistic forecasting.
- PyTorch Lightning Support: All deep learning models are implemented using PyTorch Lightning, supporting among other things custom callbacks, GPU/TPU training and custom trainers.
- Filtering Models: Darts offers three filtering models: `KalmanFilter`, `GaussianProcessFilter`, and `MovingAverage`, which allow filtering time series and, in some cases, obtaining probabilistic inferences of the underlying states/values.
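As a small illustration of the data processing and backtesting utilities, here is one possible sketch, reusing the `series` from the quickstart above (the scaling step is not required for `ExponentialSmoothing`; it is shown only to illustrate a reversible transformation):

```python
from darts.dataprocessing.transformers import Scaler
from darts.metrics import mape
from darts.models import ExponentialSmoothing

# Scale the series to [0, 1]
scaler = Scaler()
series_scaled = scaler.fit_transform(series)

# Simulate historical forecasts: starting at 75% of the series, repeatedly
# retrain the model and forecast 12 months ahead over a moving window
model = ExponentialSmoothing()
hist_fc = model.historical_forecasts(series_scaled, start=0.75, forecast_horizon=12)

# Map the forecasts back to the original scale and score them
print("Backtest MAPE = {:.2f}%".format(mape(series, scaler.inverse_transform(hist_fc))))
```

The `backtest()` method combines the last two steps and returns the metric value directly.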
Here's a breakdown of the forecasting models currently implemented in Darts. We are constantly working on bringing more models and features.
Model | Univariate | Multivariate | Probabilistic | Multiple-series training | Past-observed covariates support | Future-known covariates support | Reference |
---|---|---|---|---|---|---|---|
ARIMA | ✅ | | ✅ | | | ✅ | |
VARIMA | ✅ | ✅ | | | | ✅ | |
AutoARIMA | ✅ | | | | | ✅ | |
StatsForecastAutoARIMA (faster AutoARIMA) | ✅ | | ✅ | | | ✅ | statsforecast |
ExponentialSmoothing | ✅ | | ✅ | | | | |
BATS and TBATS | ✅ | | ✅ | | | | TBATS paper |
Theta and FourTheta | ✅ | | | | | | Theta & 4 Theta |
Prophet | ✅ | | ✅ | | | ✅ | Prophet repo |
FFT (Fast Fourier Transform) | ✅ | | | | | | |
KalmanForecaster using the Kalman filter and N4SID for system identification | ✅ | ✅ | ✅ | | | ✅ | N4SID paper |
Croston method | ✅ | | | | | | |
RegressionModel; generic wrapper around any sklearn regression model | ✅ | ✅ | | ✅ | ✅ | ✅ | |
RandomForest | ✅ | ✅ | | ✅ | ✅ | ✅ | |
LinearRegressionModel | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
LightGBMModel | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |
RNNModel (incl. LSTM and GRU); equivalent to DeepAR in its probabilistic version | ✅ | ✅ | ✅ | ✅ | | ✅ | DeepAR paper |
BlockRNNModel (incl. LSTM and GRU) | ✅ | ✅ | ✅ | ✅ | ✅ | | |
NBEATSModel | ✅ | ✅ | ✅ | ✅ | ✅ | | N-BEATS paper |
NHiTS | ✅ | ✅ | ✅ | ✅ | ✅ | | N-HiTS paper |
TCNModel | ✅ | ✅ | ✅ | ✅ | ✅ | | TCN paper, DeepTCN paper, blog post |
TransformerModel | ✅ | ✅ | ✅ | ✅ | ✅ | | |
TFTModel (Temporal Fusion Transformer) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | TFT paper, PyTorch Forecasting |
Naive Baselines | ✅ | | | | | | |
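To make the multiple-series and past-covariates columns above more concrete, here is a minimal sketch on toy series (synthetic data, a tiny number of epochs, and arbitrary hyperparameters chosen purely for illustration):

```python
from darts.models import NBEATSModel
from darts.utils.timeseries_generation import linear_timeseries, sine_timeseries

# Two toy target series and matching past-observed covariate series
series1 = sine_timeseries(length=120, value_frequency=0.05)
series2 = sine_timeseries(length=120, value_frequency=0.1)
cov1 = linear_timeseries(length=120)
cov2 = linear_timeseries(length=120)

# Train a single global model on both series, conditioned on past covariates
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=5)
model.fit([series1, series2], past_covariates=[cov1, cov2])

# Forecast 12 steps ahead for the first series
forecast = model.predict(n=12, series=series1, past_covariates=cov1)
```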
Anyone is welcome to join our Discord server to ask questions, make proposals, discuss use-cases, and more. If you spot a bug or have suggestions, GitHub issues are also welcome.
If what you want to tell us is not suitable for Discord or GitHub, feel free to send us an email at darts@unit8.co for darts-related matters, or info@unit8.co for any other inquiries.
The development is ongoing, and we welcome suggestions, pull requests and issues on GitHub. All contributors will be acknowledged on the change log page.
Before working on a contribution (a new feature or a fix), check our contribution guidelines.
If you are using Darts in your scientific work, we would appreciate citations to the following paper.
Darts: User-Friendly Modern Machine Learning for Time Series
BibTeX entry:

```
@misc{herzen2021darts,
      title={Darts: User-Friendly Modern Machine Learning for Time Series},
      author={Julien Herzen and Francesco Lässig and Samuele Giuliano Piazzetta and Thomas Neuer and Léo Tafti and Guillaume Raille and Tomas Van Pottelbergh and Marek Pasieka and Andrzej Skrodzki and Nicolas Huguenin and Maxime Dumonal and Jan Kościsz and Dennis Bader and Frédérick Gusset and Mounir Benheddi and Camila Williamson and Michal Kosinski and Matej Petrik and Gaël Grosch},
      year={2021},
      eprint={2110.03224},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```