Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API
for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by `kubectl top`, making it easier to debug autoscaling pipelines.
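For example, once Metrics Server is running you can query the Metrics API directly with `kubectl top` (the namespace below is only an example):

```shell
# Show current CPU/memory usage of all nodes, served through the Metrics API.
kubectl top nodes

# Show usage of pods in a given namespace (kube-system is just an example).
kubectl top pods -n kube-system
```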
Metrics Server is not meant for non-autoscaling purposes. For example, don't use it to forward metrics to monitoring solutions, or as a source of monitoring solution metrics.
Metrics Server offers:
- A single deployment that works on most clusters (see Requirements)
- Scalable support up to 5,000 node clusters
- Resource efficiency: Metrics Server uses 0.5m core of CPU and 4 MB of memory per node
You can use Metrics Server for:
- CPU/Memory based horizontal autoscaling (learn more about Horizontal Pod Autoscaler; see the example after this list)
- Automatically adjusting/suggesting resources needed by containers (learn more about Vertical Pod Autoscaler)
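As a sketch of the horizontal-autoscaling use case, an HPA driven by Metrics Server data can be created imperatively (the deployment name `php-apache` and the thresholds are illustrative only):

```shell
# Scale the example deployment between 1 and 10 replicas, targeting 50%
# average CPU utilization; the HPA reads usage through the Metrics API.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

# Inspect the autoscaler and the metrics it currently sees.
kubectl get hpa
```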
Don't use Metrics Server when you need:
- Non-Kubernetes clusters
- An accurate source of resource usage metrics
- Horizontal autoscaling based on other resources than CPU/Memory
For unsupported use cases, check out full monitoring solutions like Prometheus.
Metrics Server has specific requirements for cluster and network configuration. These requirements aren't the default for all cluster distributions. Please ensure that your cluster distribution supports these requirements before using Metrics Server:
- Metrics Server must be reachable from kube-apiserver
- The kube-apiserver must be correctly configured to enable an aggregation layer
- Nodes must have kubelet authorization configured to match Metrics Server configuration
- Container runtime must implement the container metrics RPCs
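One quick way to verify the aggregation-layer and reachability requirements after Metrics Server is deployed is to inspect the APIService it registers (the resource name below assumes the default manifests):

```shell
# The aggregated API should be registered and report Available=True once
# kube-apiserver can reach Metrics Server through the aggregation layer.
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the Metrics API directly through kube-apiserver.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```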
Metrics Server installation manifests are published with each GitHub release. They are available as a `components.yaml` asset on Metrics Server releases, making them installable via URL:

```shell
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
```
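After applying the manifest, you can check the rollout (the `kube-system` namespace and `metrics-server` deployment name match the default `components.yaml`; adjust them if your setup differs):

```shell
# Wait for the Metrics Server deployment to become ready.
kubectl -n kube-system rollout status deployment/metrics-server

# Check the logs for scrape or certificate errors if metrics don't show up.
kubectl -n kube-system logs deployment/metrics-server
```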
WARNING: You should no longer use manifests from the `master` branch (previously available in the `deploy/kubernetes` directory). They are now meant solely for development.
Compatibility matrix:

Metrics Server | Metrics API group/version | Supported Kubernetes version
---------------|---------------------------|-----------------------------
0.3.x          | `metrics.k8s.io/v1beta1`  | 1.8+
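To confirm which Kubernetes version you are running and whether the expected Metrics API group/version is being served, you can run:

```shell
# The server version should satisfy the "Supported Kubernetes version" column.
kubectl version

# metrics.k8s.io/v1beta1 should be listed once Metrics Server 0.3.x is installed.
kubectl api-versions | grep metrics.k8s.io
```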
Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:
- `--kubelet-preferred-address-types` - The priority of node address types used when determining an address for connecting to a particular node (default `[Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]`)
- `--kubelet-insecure-tls` - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
- `--requestheader-client-ca-file` - Specify a root certificate bundle for verifying client certificates on incoming requests.
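Flags are passed as container arguments on the Metrics Server Deployment. As an illustrative sketch (the namespace, deployment name, and existing args list are assumed to match the default manifests), the testing-only `--kubelet-insecure-tls` flag could be appended with a JSON patch:

```shell
# Append a flag to the args of the first container in the deployment.
# For testing only; do not disable TLS verification in production.
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
```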
You can get a full list of Metrics Server configuration flags by running:
```shell
docker run --rm k8s.gcr.io/metrics-server/metrics-server:v0.3.7 --help
```
A community Helm chart can deploy the metrics-server service in your cluster.
Note: This Helm chart isn't supported by the Metrics Server maintainers.
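A minimal installation sketch, assuming you have already added a Helm repository that publishes a metrics-server chart (`<your-repo>` is a placeholder, and the available chart values vary by chart):

```shell
# Install the chart into kube-system; chart name and supported values
# depend on the specific community-maintained chart you use.
helm install metrics-server <your-repo>/metrics-server --namespace kube-system
```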
Metrics Server is a component in the core metrics pipeline described in Kubernetes monitoring architecture.
For more information, see:
Before posting an issue, first check out the Frequently Asked Questions.
Learn how to engage with the Kubernetes community on the community page.
You can reach the maintainers of this project at:
This project is maintained by SIG Instrumentation
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.