
Lolo

Lolo is a random forest-centered machine learning library in Scala.

The core of Lolo is bagging simple base learners, like decision trees, to produce models with robust uncertainty estimates via jackknife-style variance estimators and explicit bias models.
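Internally, this amounts to wrapping a tree learner in a bagger. A minimal sketch of that composition follows; the class names and the numBags parameter are assumptions about Lolo's lower-level API and may differ between versions, so the RandomForest wrapper described under Usage remains the simplest entry point. Here, features and labels are placeholder data, as in the usage example.

import io.citrine.lolo.trees.regression.RegressionTreeLearner
import io.citrine.lolo.bags.Bagger

// Sketch only: bag a regression-tree base learner to get an ensemble whose
// predictions carry jackknife-based uncertainty estimates. Class names and
// the numBags parameter are assumptions and may vary between versions.
val treeLearner = new RegressionTreeLearner()
val baggedLearner = new Bagger(treeLearner, numBags = 64)
val baggedModel = baggedLearner.train(features.zip(labels)).getModel()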

Lolo supports:

  • continuous and categorical features
  • regression and classification trees
  • bagged learners to produce ensemble models, e.g. random forests
  • linear and ridge regression
  • regression leaf models, e.g. ridge regression trained on the leaf data
  • bias-corrected jackknife-after-bootstrap and infinitesimal jackknife variance estimates
  • bias models trained on out-of-bag residuals
  • discrete influence scores, which characterize the response of a prediction to each training instance
  • model-based feature importance
  • distance correlation
  • hyperparameter optimization via grid or random search
  • out-of-bag error estimates
  • parallel training via Scala parallel collections

Usage

Lolo is available on Maven Central and can be used by adding the following dependency block to your pom file:

<dependency>
    <groupId>io.citrine</groupId>
    <artifactId>lolo</artifactId>
    <version>0.2.11</version>
</dependency>
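If you build with sbt rather than Maven, the equivalent coordinates would look like the following sketch; note that the artifactId above carries no Scala-version suffix, so the plain % operator is used.

// Hypothetical sbt equivalent of the Maven coordinates above (sketch)
libraryDependencies += "io.citrine" % "lolo" % "0.2.11"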

Lolo provides higher-level wrappers for common learner combinations. For example, you can train and apply a random forest with:

import io.citrine.lolo.learners.RandomForest

// Pair each feature vector with its label to form the training set
val trainingData: Seq[(Vector[Any], Any)] = features.zip(labels)
// Train a random forest and extract the trained model
val model = new RandomForest().train(trainingData).getModel()
// Apply the model to new inputs and read off the predicted values
val predictions: Seq[Any] = model.transform(testInputs).getExpected()
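The prediction result also carries the uncertainty estimates referenced in the performance table below. A sketch of reading them back, noting that the exact return type of getUncertainty() (for example, whether it is wrapped in an Option) may vary by Lolo version:

val results = model.transform(testInputs)
// Per-prediction uncertainty estimates; the return type is an assumption
// and may be wrapped in an Option depending on the version.
val uncertainty = results.getUncertainty()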

Performance

Lolo prioritizes functionality over performance, but it is still quite fast. In its random forest use case, the complexity scales as:

Time complexity   Training rows   Features   Trees
train             O(n log n)      O(n)       O(n)
getLoss           O(n log n)      O(n)       O(n)
getExpected       O(log n)        O(1)       O(n)
getUncertainty    O(n)            O(1)       O(n)

On an Ivy Bridge test platform, the (1024 row, 1024 tree, 8 feature) performance test took 1.4 sec to train and 2.3 ms per prediction with uncertainty.
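A rough sketch of how such a measurement could be reproduced is below; the numTrees parameter name is an assumption, and features, labels, and testInputs are placeholders as in the usage example above.

import io.citrine.lolo.learners.RandomForest

// Sketch of a simple timing harness; numTrees is an assumed parameter name.
val data: Seq[(Vector[Any], Any)] = features.zip(labels)
val trainStart = System.nanoTime()
val forest = new RandomForest(numTrees = 1024).train(data).getModel()
println(f"training: ${(System.nanoTime() - trainStart) / 1e9}%.2f s")

val predictStart = System.nanoTime()
val predicted = forest.transform(testInputs)
val (means, sigmas) = (predicted.getExpected(), predicted.getUncertainty())
println(f"prediction: ${(System.nanoTime() - predictStart) / 1e6 / testInputs.size}%.2f ms each")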

Contributing

We welcome bug reports, feature requests, and pull requests. Pull requests should be made following the gitflow workflow. As contributions expand, we'll put more information here.

Authors

Related projects

  • randomForestCI is an R-based implementation of jackknife variance estimates by S. Wager