GPBoost is a software library for combining tree-boosting with Gaussian process and mixed effects models. It can also be used for tree-boosting on its own, as well as for inference and prediction with Gaussian process and mixed effects models.
The GPBoost library is written in C++ and has a C API. Both a Python package and an R package are available.
For more information, you may want to have a look at:
- The GPBoost R and Python demo illustrating how GPBoost can be used in R and Python
- The Python package and R package with installation instructions
- The companion article Sigrist (2020) or this blog post on how to combine tree-boosting with mixed effects models
- Additional Python examples and R examples
- Main parameters presenting the most important parameters and settings for using the GPBoost library
- Parameters with an exhaustive list of all parameters and customizations for the tree-boosting part
- The CLI installation guide explaining how to install the command line interface (CLI) version
Both tree-boosting and Gaussian processes are techniques that achieve state-of-the-art predictive accuracy. In addition, tree-boosting has the following advantages:
- Automatic modeling of non-linearities, discontinuities, and complex high-order interactions
- Robustness to outliers in and multicollinearity among predictor variables
- Invariance under monotone transformations of the predictor variables
- Automatic handling of missing values in predictor variables
Gaussian process and mixed effects models have the following advantages:
- Probabilistic predictions, which allow for uncertainty quantification (see the sketch after this list)
- Modeling of dependence, which, among other things, can allow for more efficient learning of the fixed effects / regression function
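To illustrate the first point, the following is a minimal sketch of how probabilistic predictions can be obtained from a grouped random effects model with the Python package. The data is simulated purely for illustration, and the `predict_var` argument is assumed to be available in the package version at hand; argument names may differ across versions.

```python
import numpy as np
import gpboost as gpb

# Simulated data, purely for illustration
n = 100
np.random.seed(1)
group = np.random.choice(10, size=n)  # grouping variable
y = np.random.normal(size=n)          # response variable

# Fit a grouped random effects model (random intercepts per group)
gp_model = gpb.GPModel(group_data=group)
gp_model.fit(y=y)

# Probabilistic predictions: predictive means and variances
pred = gp_model.predict(group_data_pred=np.array([1, 2, 3]),
                        predict_var=True)
print(pred["mu"])   # predictive means
print(pred["var"])  # predictive variances
```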
For the GPBoost algorithm, it is assumed that the response variable (label) is the sum of a non-linear mean function and so-called random effects; a sketch of how these can be specified follows the list below. The random effects can consist of
- Gaussian processes (including random coefficient processes)
- Grouped random effects (including nested, crossed, and random coefficient effects)
- A sum of the above
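In the notation of the companion article Sigrist (2020), this corresponds to a model of the form y = F(X) + Zb + xi, where F(X) is the mean function learned by the tree ensemble, b contains the random effects, and xi is an error term. The following is a minimal sketch of how such random effects models can be specified with the Python package; the data is simulated purely for illustration.

```python
import numpy as np
import gpboost as gpb

# Simulated data, purely for illustration
n = 100
np.random.seed(1)
group = np.random.choice(10, size=n)     # grouping variable for grouped random effects
coords = np.random.uniform(size=(n, 2))  # locations for a Gaussian process

# Grouped random effects (random intercepts per group)
gp_model = gpb.GPModel(group_data=group)

# Gaussian process with an exponential covariance function
gp_model = gpb.GPModel(gp_coords=coords, cov_function="exponential")

# A sum of the above: grouped random effects plus a Gaussian process
gp_model = gpb.GPModel(group_data=group, gp_coords=coords,
                       cov_function="exponential")
```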
The model is trained using the GPBoost algorithm, where training means learning the covariance parameters of the random effects and the mean function F(X) using a tree ensemble. In brief, the GPBoost algorithm is a boosting algorithm that iteratively learns the covariance parameters and adds a tree to the ensemble using a gradient and/or a Newton boosting step. In the GPBoost library, covariance parameters can be learned using (Nesterov accelerated) gradient descent or Fisher scoring, and trees are learned using the LightGBM library. See Sigrist (2020) for more details.
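The following is a minimal sketch of how training and prediction with the GPBoost algorithm can look in the Python package, continuing the type of simulated data used above. The boosting parameters ('objective', 'learning_rate', etc.) follow the LightGBM conventions; exact argument names may differ across package versions.

```python
import numpy as np
import gpboost as gpb

# Simulated training data, purely for illustration
n = 100
np.random.seed(1)
X = np.random.uniform(size=(n, 2))      # predictor variables for the mean function F(X)
group = np.random.choice(10, size=n)    # grouping variable for the random effects
y = X[:, 0] + np.random.normal(size=n)  # response variable

# Random effects part: grouped random effects (random intercepts)
gp_model = gpb.GPModel(group_data=group)

# Tree-boosting part: training data and boosting parameters
data_train = gpb.Dataset(X, y)
params = {"objective": "regression_l2", "learning_rate": 0.05,
          "max_depth": 3, "verbose": 0}

# GPBoost algorithm: covariance parameters and tree ensemble are learned jointly
bst = gpb.train(params=params, train_set=data_train,
                gp_model=gp_model, num_boost_round=100)
gp_model.summary()  # show estimated covariance parameters

# Prediction: supply predictor variables for the trees and group data for the random effects
pred = bst.predict(data=X, group_data_pred=group)
```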
- See the GitHub releases page for release notes
- 04/06/2020: First release of GPBoost
Planned extensions and open issues to which contributions are welcome include:
- Add the possibility to save a gp_model to a file
- Add Python tests for gp_model (see the corresponding R tests)
- Set up Travis CI for GPBoost
- Add GPU support for Gaussian processes
- Add a spatio-temporal Gaussian process model (e.g. a separable one)
- Add possibility to predict latent Gaussian processes and random effects (e.g. random coefficients)
- Add a safeguard against overly large steps when applying Nesterov acceleration for covariance parameter estimation
Fabio Sigrist. "Gaussian Process Boosting". Preprint (2020).
Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu. "LightGBM: A Highly Efficient Gradient Boosting Decision Tree". Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 3149-3157.
This project is licensed under the terms of the Apache License 2.0. See LICENSE for additional details.