[kube-prometheus-stack] Relabel or drop labels when scraping metrics
tyagian opened this issue · 4 comments
Describe the bug
I relabeled and dropped labels when writing to remote storage, and that worked.
However, I cannot figure out where to change metric labels, i.e. relabel or drop labels, at scrape time.
When I make these changes during scraping, I expect to see the updates in the Prometheus UI, but I do not. Where exactly in the kube-prometheus-stack values.yaml should I make these changes?
I tried https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L621
but it did not work. Any inputs on this will be appreciated.
I am using chart v45.8.1, but I don't think that matters. I will also be upgrading this Helm chart in the next few weeks, but I want to get this relabeling working sooner.
What's your helm version?
v3.10.0
What's your kubectl version?
1.22.0
Which chart?
kube-prometheus-stack
What's the chart version?
v45.8.1
What happened?
No response
What you expected to happen?
No response
How to reproduce it?
No response
Enter the changed values of values.yaml?
No response
Enter the command that you execute and failing/misfunctioning.
I applied changes via ArgoCD
Anything else we need to know?
No response
But I am not getting where to make changes in metrics labels;
This really depends on the purpose of the label manipulation.
1. If you wish to change the labels attached to the scrape targets/endpoints, i.e. to really add/change/remove their mostly identifying and configuration information like node, team, purpose, location, cluster, etc., you apply Prometheus' `scrape_configs.relabel_configs`, which corresponds to a ServiceMonitor's `spec.endpoints.relabelings` (mostly the chart's `serviceMonitor.relabelings`). These changes take place before Prometheus scrapes the targets, which is why you can set e.g. `__address__`, which Prometheus then uses for scraping. If the actions succeed, their results can be seen in the UI underneath Status/Targets, as it is the targets that are being affected. The labels will be attached to all time series coming out of those targets. Having been retrieved from the targets, you can further manipulate them through:
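The first case can be sketched in chart values like this (the target component, label names, and values are made-up examples, not the chart's defaults):

```yaml
# values.yaml (kube-prometheus-stack) -- illustrative sketch only
kubelet:
  serviceMonitor:
    relabelings:
      # copy the node name from a service-discovery meta label onto the target
      - sourceLabels: [__meta_kubernetes_node_name]
        targetLabel: node
        action: replace
      # attach a static label to every target (value is hypothetical)
      - targetLabel: cluster
        replacement: my-cluster
        action: replace
```

After applying, the new label set shows up on each kubelet target under Status/Targets.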
2. Prometheus' `scrape_configs.metric_relabel_configs` and a ServiceMonitor's `spec.endpoints.metricRelabelings` (mostly the chart's `serviceMonitor.metricRelabelings`). In this way, one can apply changes to time series before they get ingested, e.g. commonly dropping a label, dropping a metric, dropping all metrics from a target, etc. This is often very useful for removing unused metrics or removing labels when cardinality is becoming high.
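A corresponding sketch for the second case (both the label and the metric name are hypothetical):

```yaml
# values.yaml (kube-prometheus-stack) -- illustrative sketch only
kubelet:
  serviceMonitor:
    metricRelabelings:
      # remove a high-cardinality label from all scraped series
      - regex: pod_template_hash
        action: labeldrop
      # drop an entire metric by name before ingestion
      - sourceLabels: [__name__]
        regex: some_noisy_metric_total
        action: drop
```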
Most of the ServiceMonitors already include comments on `relabelings` as well as `metricRelabelings` in the values file, and some are already configured, e.g. `kubelet.serviceMonitor.cAdvisorRelabelings` (L1380) setting `metrics_path` (an endpoint's label) and `kubelet.serviceMonitor.cAdvisorMetricRelabelings` (L1325) dropping selected metrics.
When I do these changes during scraping, I am expecting to see updates in Prometheus UI
I am not sure which updates in the UI you mean. As said above, targets will show their label set, including any changes made by `relabelings` if successful, and the resulting label set stays there unless the relabelings for that target change.
I tried https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L621 but it did not work.
That would be case 2, which has no effect on the target's labels. If you wish to rewrite the target's label set, you'll need to use `relabelings`, found just below that field.
@zeritti Thank you for your detailed response.
When you say,
Most of the ServiceMonitors already include comments on relabelings as well as metricRelabelings in the values file, some are already configured
I understood you are suggesting adding the relabeling configuration to each application's ServiceMonitor, but can I instead add it here, in the kube-prometheus-stack Helm chart or its ServiceMonitor, so that I can relabel or drop any label from multiple applications with a single configuration?
Example:
If I am running multiple internally developed microservices deployed via Helm charts, can I add configuration to the kube-prometheus-stack chart's values.yaml to drop or rename their labels, assuming the labels are common, so that a single configuration changes the metrics of multiple applications? I am using service discovery to get metrics from those applications, and they are also running on Kubernetes.
Most of the ServiceMonitors already include comments on relabelings as well as metricRelabelings in the values file, some are already configured
I was saying that examples of relabelings are already present in the values file (some as examples, others as real configuration); you can draw from those.
For your own applications' metrics, you can define scrape configs by means of ServiceMonitors, PodMonitors, or native Prometheus configuration. A single scrape configuration only affects the targets, and their metrics, that match that very configuration; e.g. dropping a label in one scrape config will not affect labels of metrics scraped by other scrape configs.
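As a concrete sketch of that last point, a single ServiceMonitor can select several of your own services and apply one set of rules to all of them (every name, label, and port below is made up):

```yaml
# Hypothetical ServiceMonitor covering multiple in-house services
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: internal-apps
  namespace: monitoring
  labels:
    release: kube-prometheus-stack   # must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      team: my-team                  # common label set on all the Services
  namespaceSelector:
    any: true
  endpoints:
    - port: metrics
      metricRelabelings:
        # drop a label common to all matched applications
        - regex: build_id
          action: labeldrop
```

Rules defined here affect only the targets matched by this ServiceMonitor, not those of other scrape configs.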
Sorry, but I will add my question about labels here.
I need to drop all the metrics that come from the resources in one of the namespaces. This is quite a large number of pods, so it would be optimal to do this at the scraping stage, so as not to overload the system.
If I try to do something like this
```yaml
kubelet:
  serviceMonitor:
    relabelings:
      - sourceLabels: [__meta_kubernetes_namespace]
        regex: (gtm.*)
        action: drop
kube-state-metrics:
  prometheus:
    monitor:
      relabelings:
        - sourceLabels: [__meta_kubernetes_namespace]
          regex: (gtm.*)
          action: drop
prometheus:
  serviceMonitor:
    relabelings:
      - sourceLabels: [__meta_kubernetes_namespace]
        regex: (gtm.*)
        action: drop
```
it brings no result, I guess because the namespace has already been written into the namespace label earlier in the generated configuration:
```yaml
- source_labels: [__meta_kubernetes_namespace]
  separator: ;
  regex: (.*)
  target_label: namespace
  replacement: $1
  action: replace
...
- source_labels: [__meta_kubernetes_namespace]
  separator: ;
  regex: (gtm.*)
  replacement: $1
  action: drop
```
I think doing the drops via `metricRelabelings` is not optimal from a resource-usage point of view. Could you tell me where to go next?