Cerebral is a Kubernetes cluster autoscaler with pluggable metrics backends and scaling engines.
## Overview

Cerebral is a provider-agnostic tool for increasing or decreasing the size of pools of nodes in your Kubernetes cluster in response to alerts generated by user-defined policies. These policies reference pluggable, configurable metrics backends (e.g. Prometheus) for gathering the metrics on which autoscaling decisions are made.
## Why Autoscaling?

Automatically increasing the number of nodes is important for meeting resource demand, while decreasing it is helpful for controlling cost.

Manually scaling nodes in a Kubernetes cluster is not feasible given the largely dynamic nature of web infrastructure; automation is needed to assist operators with these tasks. With the increased importance placed on monitoring and observability in modern infrastructure, operators should be able to easily take action on the metrics they are collecting.
## How Cerebral Works

Cerebral is simple at its core: it polls a `MetricsBackend` and triggers alerts if thresholds defined in any `AutoscalingPolicy` associated with any `AutoscalingGroup` are breached. These alerts may result in a scale request to the `AutoscalingEngine`.
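In pseudocode terms, the control loop looks something like the sketch below. Every name and signature here is an illustrative assumption, not Cerebral's actual internals:

```go
package sketch

import (
	"log"
	"time"
)

// getMetricFunc stands in for a pluggable metrics backend query, and
// requestScaleFunc for a pluggable engine call; both signatures are
// hypothetical.
type getMetricFunc func(metric string, nodeSelector map[string]string) (float64, error)
type requestScaleFunc func(nodeSelector map[string]string, numNodes int) error

// pollAndScale evaluates one hypothetical policy (a metric name plus a
// scale-up threshold) against one node group, requesting one extra node
// whenever the threshold is breached.
func pollAndScale(getMetric getMetricFunc, requestScale requestScaleFunc,
	metric string, threshold float64, nodeSelector map[string]string,
	currentNodes int, interval time.Duration) {
	for range time.Tick(interval) {
		value, err := getMetric(metric, nodeSelector)
		if err != nil {
			log.Printf("polling %q failed: %v", metric, err)
			continue
		}
		if value > threshold {
			// Threshold breached: the alert becomes a scale request.
			currentNodes++
			if err := requestScale(nodeSelector, currentNodes); err != nil {
				log.Printf("scale request failed: %v", err)
			}
		}
	}
}
```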
A `MetricsBackend`, `AutoscalingPolicy`, `AutoscalingGroup`, and `AutoscalingEngine` are all defined by Custom Resource Definitions (CRDs). An `AutoscalingGroup`, for example, is just a group of Kubernetes nodes that can be selected using some label selector.
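For illustration, a group defined by a hypothetical label such as `example.com/pool=workers` could be enumerated with an ordinary client-go list call; this is a sketch of the concept, not code from the project:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listGroupNodes returns the names of the nodes belonging to an
// autoscaling group, assuming the group is defined by the hypothetical
// label "example.com/pool=workers".
func listGroupNodes(ctx context.Context, clientset kubernetes.Interface) ([]string, error) {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "example.com/pool=workers",
	})
	if err != nil {
		return nil, fmt.Errorf("listing nodes: %w", err)
	}
	names := make([]string, 0, len(nodes.Items))
	for _, n := range nodes.Items {
		names = append(names, n.Name)
	}
	return names, nil
}
```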
## Pluggable Architecture

The most powerful feature of Cerebral is the ability to easily plug in new metrics backend and autoscaling engine implementations.
### Metrics Backend

Support for a different `MetricsBackend` can be added by implementing the metrics backend interface.
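As a rough sketch of what such an interface might look like (an assumption for illustration, not the project's actual definition):

```go
package sketch

// MetricsBackend sketches the plugin interface a metrics backend might
// implement; the actual interface in the Cerebral source may differ.
type MetricsBackend interface {
	// GetValue returns the current value of the named metric, scoped to
	// the nodes matching nodeSelector. configuration carries
	// backend-specific options, e.g. from the MetricsBackend CRD.
	GetValue(metric string, configuration map[string]string, nodeSelector map[string]string) (float64, error)
}
```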
In addition to traditional metrics backends such as the currently available Prometheus integration, there are countless possible use-cases for custom, application-specific metrics backends. For example, autoscaling could be performed based on the current depth of some application queue.
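For instance, a hypothetical queue-depth backend could satisfy the interface sketched above by reporting the length of an application work queue:

```go
package sketch

// queueDepthBackend is a hypothetical application-specific backend that
// reports the depth of a work queue rather than a traditional node metric.
type queueDepthBackend struct {
	// depth returns the current number of items waiting in the queue,
	// e.g. by querying the application's admin API.
	depth func() (int, error)
}

// GetValue implements the sketched MetricsBackend interface. It ignores
// its arguments because queue depth is a single cluster-wide
// application metric here.
func (b queueDepthBackend) GetValue(metric string, configuration map[string]string, nodeSelector map[string]string) (float64, error) {
	n, err := b.depth()
	if err != nil {
		return 0, err
	}
	return float64(n), nil
}
```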
The currently available metrics backends include:

- Prometheus
### Autoscaling Engine

Support for a different `AutoscalingEngine` can be added by implementing the engine interface.
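Again as an illustrative assumption rather than the project's actual definition, such an interface might look like:

```go
package sketch

// Engine sketches the plugin interface an autoscaling engine might
// implement; the actual interface in the Cerebral source may differ.
type Engine interface {
	// SetTargetNodeCount requests that the node pool matching
	// nodeSelector be scaled to numNodes, returning whether a scaling
	// operation was actually initiated.
	SetTargetNodeCount(nodeSelector map[string]string, numNodes int, strategy string) (bool, error)
}
```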
Because an `AutoscalingGroup` is defined by a label selector, the provider (or some other entity) must be able to label nodes when they are added.
The currently available engines include:
## Project Status

This project is in alpha. There may be breaking changes as we continue to expand the project and integrate user feedback.

Currently, the project supports several metrics backends and engines. A lot more is to come - please see the GitHub issues for a roadmap, and feel free to open your own issue if a feature you'd like to see isn't already in the roadmap!
## Out-of-Tree Support

Currently, all pluggable components, namely the `MetricsBackend` and `AutoscalingEngine`, must be implemented in-tree. There are a number of advantages to supporting out-of-tree components:
- Users can leverage their own implementations without waiting for official support from this project
- Components can be versioned, updated, and deployed independently from one another
Supporting out-of-tree components is on our roadmap (see e.g. #45) but not yet implemented.
## Alternatives

Another tool for autoscaling is the Kubernetes Autoscaler. It requires that the integrated providers support Autoscaling Groups (ASGs), a feature that many cloud providers do not offer.

Additionally, its method of scaling is naïve, often triggering events too late to be useful. Cerebral takes a more generic, flexible, and powerful approach to autoscaling by integrating with existing metrics backends as input and, in turn, triggering scaling actions through pluggable engines.
## Learn More
Please refer to our documentation for more information on building, configuring, and running Cerebral.
## Contributing

Thank you for your interest in this project and in contributing! Feel free to open issues for feature requests, bugs, or even just questions - we love feedback and want to hear from you.
Pull requests are also always welcome! However, if the feature you're considering adding is fairly large in scope, please consider opening an issue for discussion first. See CONTRIBUTING.md for more details.