A minimal example showcasing how to autoscale Kubernetes workloads with any Datadog metric or custom query. You can refer to the Datadog documentation for more details. An understanding of how Horizontal Pod Autoscaling works in Kubernetes is also recommended; have a read through this piece of documentation.
- A Kubernetes cluster. Sandbox-like clusters such as Rancher Desktop or minikube are recommended for trying out this example.
- A Datadog account. You can get started with a free Datadog trial here.
- A Datadog Agent and Cluster Agent running in your cluster. Following the Helm-based installation steps will greatly simplify the task.
- Configure your Agent's deployment to support DogStatsD metrics as well as an external metrics server for the Datadog Cluster Agent. Once all of the steps above are completed, you should have a values file like so:
```yaml
datadog:
  apiKeyExistingSecret: datadog-secret
  appKeyExistingSecret: datadog-secret
  site: datadoghq.com
  dogstatsd:
    port: 8125
    useHostPort: true
    nonLocalTraffic: true
clusterAgent:
  enabled: true
  metricsProvider:
    enabled: true
    useDatadogMetrics: true
```
This file is provided as `datadog-values.yaml`, and this README will cover the Agent's deployment. That said, it's highly recommended that you review the documentation links above to understand the overall logic behind this example (and how to set things up if you are not using Helm).
For starters, you'll need to deploy node Datadog Agents as well as a Cluster Agent. Here are the steps for doing so with Helm:
- Install Helm.
- Add the Datadog Helm repository:

```shell
helm repo add datadog https://helm.datadoghq.com
helm repo update
```

- Create a Kubernetes secret holding your Datadog API and application keys:

```shell
kubectl create secret generic datadog-secret --from-literal api-key=$DD_API_KEY --from-literal app-key=$DD_APP_KEY
```

- Clone this repo (e.g. `git clone git@github.com:nsuarezcanton/datadog-hpa.git`) and `cd` into it.
- Deploy the node Agents and the Cluster Agent:

```shell
helm install datadog-agent -f datadog-values.yaml --set targetSystem=linux datadog/datadog
```
Note that this assumes that `$DD_API_KEY` and `$DD_APP_KEY` are environment variables set in your current shell session.
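Before moving on, it's worth confirming that the Agent pods are healthy. A quick sanity check (the label selectors below assume the pod labels the `datadog/datadog` chart generates for a release named `datadog-agent`):

```shell
# Node Agents run as a DaemonSet, the Cluster Agent as a Deployment.
kubectl get pods -l app=datadog-agent
kubectl get pods -l app=datadog-agent-cluster-agent
```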
Let's start by deploying a service that will submit custom metrics under the namespace `datadog.examples.kubernetes_hpa.custom`:
```shell
kubectl apply -f custom-metrics-deployment.yaml
```
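If you're curious what such a service can look like, here is a minimal sketch of a Deployment that pushes a DogStatsD gauge to the node-local Agent. This is not the actual contents of `custom-metrics-deployment.yaml`; the metric name `datadog.examples.kubernetes_hpa.custom.metric`, the gauge value, and the 10-second interval are illustrative assumptions.

```yaml
# Hypothetical sketch only -- the manifest shipped in this repo may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metrics-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-metrics-example
  template:
    metadata:
      labels:
        app: custom-metrics-example
    spec:
      containers:
        - name: metric-sender
          image: busybox
          env:
            # The Agent's DogStatsD port is reachable on the node IP because
            # datadog-values.yaml sets dogstatsd.useHostPort and nonLocalTraffic.
            - name: DD_AGENT_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Emit an arbitrary gauge every 10 seconds over UDP.
              while true; do
                echo "datadog.examples.kubernetes_hpa.custom.metric:42|g" | nc -u -w 1 "${DD_AGENT_HOST}" 8125
                sleep 10
              done
```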
Let's then apply our nginx deployment. Though this is used as a "dummy" deployment, it will be scaled up (and down) based on the value of the custom metric submitted by `custom-metrics-deployment.yaml`. To apply the nginx deployment:
```shell
kubectl apply -f nginx-deployment.yaml
```
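At this point you can confirm the target Deployment exists; its name, `nginx-deployment`, is what the HPA below will reference:

```shell
kubectl get deployment nginx-deployment
```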
Now, the custom metrics service and the nginx pods have been deployed. The next step is to deploy a `DatadogMetric` custom resource, where the Cluster Agent will store the value from the metric query. Keep in mind that the Cluster Agent is acting as an external metrics server. To apply the `DatadogMetric`:
```shell
kubectl apply -f crd-datadog-metric.yaml
```
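For context, a `DatadogMetric` object is essentially a named Datadog query that the Cluster Agent keeps evaluating and exposes through the external metrics API. A minimal sketch follows; the exact query lives in `crd-datadog-metric.yaml` and may differ, the one below simply assumes a gauge under the metric namespace used earlier:

```yaml
# Sketch of a DatadogMetric -- the query shipped in crd-datadog-metric.yaml
# may be different.
apiVersion: datadoghq.com/v1alpha1
kind: DatadogMetric
metadata:
  name: hpa-metric
spec:
  # Any valid Datadog query works here.
  query: avg:datadog.examples.kubernetes_hpa.custom.metric{*}
```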
The final step is to wire up an HPA resource to scale `nginx-deployment` based on the value of `hpa-metric`. To apply the HPA:
```shell
kubectl apply -f hpa-datadog-metric.yaml
```
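For reference, the HPA consumes the `DatadogMetric` above as an external metric using the `datadogmetric@<namespace>:<name>` convention. Here is a sketch of what that wiring can look like; the replica bounds, target value, HPA name, and the `default` namespace are assumptions, and the actual `hpa-datadog-metric.yaml` may differ:

```yaml
# Sketch of an HPA backed by the hpa-metric DatadogMetric -- the values in
# hpa-datadog-metric.yaml may differ.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: External
      external:
        metric:
          # Points the autoscaler at the DatadogMetric named "hpa-metric"
          # (assumed to live in the "default" namespace).
          name: datadogmetric@default:hpa-metric
        target:
          type: Value
          value: "10"
```

Once everything is applied, `kubectl describe hpa` and `kubectl get datadogmetric` are handy for checking that the Cluster Agent is resolving the query and feeding values to the autoscaler.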