kube-state-metrics

Add-on agent to generate and expose cluster-level metrics.


Overview

kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. (See examples in the Metrics section below.) It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside the cluster, such as deployments, nodes and pods.

The metrics are exported through the Prometheus Go client library on the HTTP endpoint /metrics on the listening port (default 80). They are served either as plaintext or protobuf depending on the Accept header. They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint. You can also open /metrics in a browser to see the raw metrics.
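
For illustration, a plaintext scrape of /metrics yields series in the Prometheus text exposition format; the node address, deployment name and sample values below are made up:

```
# TYPE kube_node_status_ready gauge
kube_node_status_ready{node="10.0.0.1",condition="true"} 1
kube_node_status_ready{node="10.0.0.1",condition="false"} 0
kube_node_status_ready{node="10.0.0.1",condition="unknown"} 0
# TYPE kube_deployment_status_replicas_available gauge
kube_deployment_status_replicas_available{deployment="nginx",namespace="default"} 3
```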

Requires Kubernetes 1.2+

Metrics

There are many more metrics we could report, but this first pass is focused on those that could be used for actionable alerts. Please contribute PRs for additional metrics!

WARNING: THESE METRIC/TAG NAMES ARE UNSTABLE AND MAY CHANGE IN A FUTURE RELEASE.

Node Metrics

| Metric name | Metric type | Labels/tags |
| ----------- | ----------- | ----------- |
| kube_node_info | Gauge | `node=<node-address>` <br> `kernel_version=<kernel-version>` <br> `os_image=<os-image-name>` <br> `container_runtime_version=<container-runtime-and-version-combination>` <br> `kubelet_version=<kubelet-version>` <br> `kubeproxy_version=<kubeproxy-version>` |
| kube_node_status_ready | Gauge | `node=<node-address>` <br> `condition=<true\|false\|unknown>` |
| kube_node_status_out_of_disk | Gauge | `node=<node-address>` <br> `condition=<true\|false\|unknown>` |
| kube_node_status_phase | Gauge | `node=<node-address>` <br> `phase=<Pending\|Running\|Terminated>` |
| kube_node_status_capacity_cpu_cores | Gauge | `node=<node-address>` |
| kube_node_status_capacity_memory_bytes | Gauge | `node=<node-address>` |
| kube_node_status_capacity_pods | Gauge | `node=<node-address>` |
| kube_node_status_allocateable_cpu_cores | Gauge | `node=<node-address>` |
| kube_node_status_allocateable_memory_bytes | Gauge | `node=<node-address>` |
| kube_node_status_allocateable_pods | Gauge | `node=<node-address>` |
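
As an illustration of the actionable alerts these metrics enable, Prometheus expressions along the following lines can flag unhealthy nodes (the exact form is a sketch, assuming a series carries the value 1 when its condition holds):

```
# Nodes whose Ready condition is reported as false.
kube_node_status_ready{condition="false"} == 1

# Nodes reporting the OutOfDisk condition.
kube_node_status_out_of_disk{condition="true"} == 1
```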

Deployment Metrics

| Metric name | Metric type | Labels/tags |
| ----------- | ----------- | ----------- |
| kube_deployment_status_replicas | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_status_replicas_available | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_status_replicas_unavailable | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_status_replicas_updated | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_status_replicas_observed_generation | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_spec_replicas | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_spec_paused | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
| kube_deployment_metadata_generation | Gauge | `deployment=<deployment-name>` <br> `namespace=<deployment-namespace>` |
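
For example, a query sketch that surfaces deployments whose available replicas lag behind the desired count could look like the following (purely illustrative; it relies on the deployment and namespace labels listed above matching across series):

```
# Deployments with fewer available replicas than desired.
kube_deployment_spec_replicas - kube_deployment_status_replicas_available > 0

# A similar check using the unavailable-replica gauge directly.
kube_deployment_status_replicas_unavailable > 0
```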

Pod Metrics

| Metric name | Metric type | Labels/tags |
| ----------- | ----------- | ----------- |
| kube_pod_info | Gauge | `pod=<pod-name>` <br> `namespace=<pod-namespace>` <br> `host_ip=<host-ip>` <br> `pod_ip=<pod-ip>` <br> `start_time=<date-time since kubelet acknowledged pod>` |
| kube_pod_status_phase | Gauge | `pod=<pod-name>` <br> `namespace=<pod-namespace>` <br> `phase=<Pending\|Running\|Succeeded\|Failed\|Unknown>` |
| kube_pod_status_ready | Gauge | `pod=<pod-name>` <br> `namespace=<pod-namespace>` <br> `condition=<true\|false\|unknown>` |
| kube_pod_status_scheduled | Gauge | `pod=<pod-name>` <br> `namespace=<pod-namespace>` <br> `condition=<true\|false\|unknown>` |
| kube_pod_container_info | Gauge | `container=<container-name>` <br> `pod=<pod-name>` <br> `namespace=<pod-namespace>` <br> `image=<image-name>` <br> `image_id=<image-id>` <br> `container_id=<containerid>` |
| kube_pod_container_status_waiting | Gauge | `container=<container-name>` <br> `pod=<pod-name>` <br> `namespace=<pod-namespace>` |
| kube_pod_container_status_running | Gauge | `container=<container-name>` <br> `pod=<pod-name>` <br> `namespace=<pod-namespace>` |
| kube_pod_container_status_terminated | Gauge | `container=<container-name>` <br> `pod=<pod-name>` <br> `namespace=<pod-namespace>` |
| kube_pod_container_status_ready | Gauge | `container=<container-name>` <br> `pod=<pod-name>` <br> `namespace=<pod-namespace>` |
| kube_pod_container_status_restarts | Counter | `container=<container-name>` <br> `namespace=<pod-namespace>` <br> `pod=<pod-name>` |
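
For example, the restart counter and the phase gauge lend themselves to queries like these (the 15-minute window and the use of increase() are illustrative choices, not part of this project):

```
# Containers that restarted within the last 15 minutes.
increase(kube_pod_container_status_restarts[15m]) > 0

# Pods currently reported in the Pending phase.
kube_pod_status_phase{phase="Pending"} == 1
```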

kube-state-metrics vs. Heapster

Heapster is a project which fetches metrics (such as CPU and memory utilization) from the Kubernetes API server and nodes and sends them to various time-series backends such as InfluxDB or Google Cloud Monitoring.

While Heapster's focus is on forwarding metrics already generated by Kubernetes, kube-state-metrics is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). The reason not to extend Heapster with kube-state-metrics' abilities is that the concerns are fundamentally different: Heapster only needs to fetch, format and forward metrics that already exist, whereas kube-state-metrics holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics from it, but has no responsibility for exporting its metrics anywhere.

In other words, kube-state-metrics itself is designed to be another source for Heapster (although this is not currently the case).

Additionally, some monitoring systems such as Prometheus do not use Heapster for metric collection at all and instead implement their own. Having kube-state-metrics as a separate project enables access to these metrics from those monitoring systems.

Building the Docker container

Simply run the following command in the root folder of this project; it will create a self-contained, statically linked binary and build a Docker image:

```sh
make container
```

Usage

Simply build and run kube-state-metrics inside a Kubernetes pod whose service account token has read-only access to the Kubernetes cluster.

Kubernetes Deployment

To deploy this project, you can simply run `kubectl apply -f kubernetes` and a Kubernetes service and deployment will be created. The service already has the prometheus.io/scrape: 'true' annotation, so if you have added the recommended Prometheus service-endpoint scraping configuration (a sketch is shown below), Prometheus will pick the endpoint up automatically and you can start using the generated metrics right away.
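
The recommended service-endpoint scraping configuration referred to above uses Prometheus' Kubernetes service discovery with a keep action on the scrape annotation. A minimal sketch, assuming a Prometheus version that supports kubernetes_sd_configs (the full example configuration shipped with Prometheus contains additional relabeling):

```yaml
scrape_configs:
  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints whose service carries the
      # prometheus.io/scrape: 'true' annotation.
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
```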

Development

When developing, test a metric dump against your local Kubernetes cluster by running:

```sh
go install
kube-state-metrics --apiserver=<APISERVER-HERE> --in-cluster=false --port=8080
```

Then curl the metrics endpoint:

```sh
curl localhost:8080/metrics
```