`creme` is a library for online machine learning, also known as incremental learning. Online learning is a machine learning regime in which the model learns one observation at a time. This is in contrast to batch learning, where all the data is processed in one go. Incremental learning is desirable when the data is too big to fit in memory, or simply when it isn't available all at once. `creme`'s API is heavily inspired by that of scikit-learn, enough so that users who are familiar with it should feel right at home.
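Concretely, a `creme` model is updated with `fit_one` and queried with `predict_one`, one observation at a time, with each observation's features represented as a plain `dict`. Here is a minimal sketch of that contract; the feature names and label below are made up purely for illustration.

```python
from creme import linear_model

model = linear_model.LogisticRegression()

# In creme, a single observation is just a dict of features
x = {'age': 42, 'income': 30_000}  # hypothetical features
y = True                           # hypothetical binary label

model = model.fit_one(x, y)   # learn from one labelled observation
print(model.predict_one(x))   # predict the label for a single observation
```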
- Documentation
- Issue tracker
- Package releases
- Change history
- PyData Amsterdam 2019 presentation (slides, video incoming)
- Blog post from pyimagesearch for image classification
☝️ `creme` is tested with Python 3.6 and above.
`creme` mostly relies on Python's standard library. Sometimes it relies on `numpy`, `scipy`, and `scikit-learn` so as not to reinvent the wheel. `creme` can simply be installed with `pip`:
pip install creme
In the following snippet we'll fit an online logistic regression whose weights are optimized with the AdaGrad algorithm. We'll scale the data so that each variable has a mean of 0 and a standard deviation of 1. The standard scaling and the logistic regression are combined into a pipeline using the `|` operator. We'll use the `stream.iter_sklearn_dataset` function to stream over the Wisconsin breast cancer dataset, and we'll measure the F1 score using progressive validation: each observation is used to test the model before the model learns from it.
>>> from creme import compose
>>> from creme import linear_model
>>> from creme import metrics
>>> from creme import optim
>>> from creme import preprocessing
>>> from creme import stream
>>> from sklearn import datasets
>>> X_y = stream.iter_sklearn_dataset(
...     dataset=datasets.load_breast_cancer(),
...     shuffle=True,
...     random_state=42
... )
>>> scaler = preprocessing.StandardScaler()
>>> log_reg = linear_model.LogisticRegression(optimizer=optim.AdaGrad())
>>> model = scaler | log_reg
>>> metric = metrics.F1()
>>> for x, y in X_y:
...     y_pred = model.predict_one(x)
...     model = model.fit_one(x, y)
...     metric = metric.update(y, y_pred)
>>> metric
F1: 0.97191
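Continuing from the snippet above, the fitted pipeline can also score individual observations on the fly. The following is just an illustrative sketch; the exact probabilities depend on the model's current state.

```python
# Take a single observation from a fresh stream, purely for illustration
x, _ = next(stream.iter_sklearn_dataset(dataset=datasets.load_breast_cancer()))

# predict_proba_one returns a dict mapping each class to its estimated probability
print(model.predict_proba_one(x))
```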
- scikit-learn: Some of its estimators have a `partial_fit` method which allows them to update themselves with new observations. However, online learning isn't treated as a first-class citizen, which can make things awkward (see the sketch after this list). You should definitely use scikit-learn if your data fits in memory and you can afford to retrain your model from scratch every time new data is available.
- Vowpal Wabbit: VW is probably the fastest out-of-core learning system available. At its core it implements a state-of-the-art adaptive gradient descent algorithm with many tricks. It also has some mechanisms for doing active learning and using bandits. However, it isn't a "true" online learning system, as it assumes the data is available in a file and can be looped over multiple times. It is also somewhat difficult for newcomers to grok.
- LIBOL: This is a good library written by academics with some great documentation. It's written in C++ and seems to be pretty fast. However it only focuses on the learning aspect of online learning, not on other mundane yet useful tasks such as feature extraction and preprocessing. Moreover it hasn't been updated for a few years.
- Spark Streaming: This is an extension of Apache Spark which caters to big data practitioners. It processes data in mini-batches instead of performing true streaming operations. It also has some compatibility with MLlib for implementing online learning algorithms, such as streaming linear regression and streaming k-means. However, it is a somewhat overwhelming solution which might be overkill for certain use cases.
- TensorFlow: Deep learning systems are in some sense online learning systems because they use online gradient descent. However, popular libraries are mostly attuned to batch situations. Because frameworks such as Keras and PyTorch are so popular and very well backed, there is no real point in implementing neural networks in creme. Additionally, for a lot of problems neural networks might not be the right tool, and you might want to use a simple logistic regression or a decision tree (for which online algorithms exist).
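To make the scikit-learn point above concrete, here is roughly what incremental updates look like with `partial_fit`. This is an illustrative sketch: note how the data is consumed as NumPy arrays in (mini-)batches, and how the full set of classes has to be declared up front.

```python
import numpy as np
from sklearn import datasets
from sklearn.linear_model import SGDClassifier

X, y = datasets.load_breast_cancer(return_X_y=True)
model = SGDClassifier()  # a linear model trained with stochastic gradient descent

# partial_fit consumes (mini-)batches of arrays and needs every class declared
# in advance, which is part of what makes pure online workflows feel awkward
for i in range(0, len(X), 50):
    model.partial_fit(X[i:i + 50], y[i:i + 50], classes=np.unique(y))
```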
Feel free to open an issue if you feel like other solutions are worth mentioning.
Like many subfields of machine learning, online learning is far from being an exact science, so there is still a lot to do. Feel free to contribute in any way you like; we're always open to new ideas and approaches. If you want to contribute to the code base, please check out the CONTRIBUTING.md file. Also take a look at the issue tracker and see if anything takes your fancy.
Last but not least, you are more than welcome to share with us how you're using `creme` or online learning in general! We believe that online learning solves a lot of pain points in practice, and we would love to share experiences.
This project follows the all-contributors specification. Contributions of any kind are welcome!
- Max Halford 📆 💻
- AdilZouitine 💻
- Raphael Sourty 💻
- Geoffrey Bolmier 💻
- vincent d warmerdam 💻
- VaysseRobin 💻
- Lygon Bowen-West 💻
- Florent Le Gac 💻
See the license file.