grafana/k8s-monitoring-helm

Component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope when upgrading to Alloy

winterrobert opened this issue · 4 comments

It feels like I've missed something obvious with my Flow config when upgrading from grafana-agent to alloy - can somebody help me?

I'm using the k8s-monitoring chart and just upgraded to 1.0.12, which replaces grafana-agent with grafana-alloy. I updated my values.yaml according to the breaking changes, but Alloy doesn't start and throws an error:

component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope

Do I need to remove any prometheus CRDs and let the helm chart reinstall them? I'm fairly confident that prometheus.relabel.metrics_service.receiver should exist, as it's listed in the custom flow example in the docs.

Alloy version:

❯ kubectl -n grafana-alloy-k8s-monitoring describe po grafana-alloy-k8s-monitoring-0 | grep Image
    Image:         docker.io/grafana/alloy:v1.1.0

Alloy logs:

❯ kubectl -n grafana-alloy-k8s-monitoring logs grafana-alloy-k8s-monitoring-0

Error: /etc/alloy/config.alloy:83:21: component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope

82 |     honor_labels = true
83 |     forward_to   = [prometheus.relabel.metrics_service.receiver]
   |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
84 | }

Error: /etc/alloy/config.alloy:141:17: component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope

140 | prometheus.relabel "metrics_default_integrations_rabbitmq" {
141 |   forward_to = [prometheus.relabel.metrics_service.receiver]
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
142 | 

Error: /etc/alloy/config.alloy:173:17: component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope

172 | prometheus.relabel "integrations_redis_exporter" {
173 |   forward_to = [prometheus.relabel.metrics_service.receiver]
    |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
174 | 

Error: /etc/alloy/config.alloy:111:21: component "prometheus.relabel.metrics_service.receiver" does not exist or is out of scope

110 |     honor_labels = true
111 |     forward_to   = [prometheus.relabel.metrics_service.receiver]
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
112 | }
Error: could not perform the initial load successfully

My values.yaml / extraConfig:

k8s-monitoring:
  extraConfig: |-
    discovery.relabel "metrics_clickhouse" {
        targets = discovery.kubernetes.endpoints.targets
        rule {
            source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
            regex = "altinity-clickhouse-operator"
            action = "keep"
        }
        rule {
            source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_port"]
            regex = "8888"
            action = "keep"
        }

        rule {
            source_labels = ["__meta_kubernetes_pod_label_clickhouse_altinity_com_namespace"]
            target_label = "clickhouse_cluster"
        }
        rule {
            source_labels = ["__meta_kubernetes_pod_name"]
            target_label = "instance"
        }

        rule {
            source_labels = ["__meta_kubernetes_pod_label_clickhouse_altinity_com_namespace"]
            target_label = "exported_namespace"
        }
        rule {
            source_labels = ["__meta_kubernetes_pod_label_clickhouse_altinity_com_chi"]
            target_label = "chi"
        }
        rule {
            source_labels = ["__meta_kubernetes_pod_label_clickhouse_altinity_com_namespace"]
            target_label = "hostname"
        }
    }

    prometheus.scrape "metrics_clickhouse" {
        job_name     = "integrations/clickhouse"
        targets      = discovery.relabel.metrics_clickhouse.output
        honor_labels = true
        forward_to   = [prometheus.relabel.metrics_service.receiver]
    }

    discovery.relabel "metrics_trivy" {
        targets = discovery.kubernetes.endpoints.targets
        rule {
            source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
            regex = "trivy-operator"
            action = "keep"
        }
        rule {
            source_labels = ["__meta_kubernetes_pod_container_port_number"]
            regex = "8080"
            action = "keep"
        }
    }

    prometheus.scrape "metrics_trivy" {
        job_name     = "integrations/trivy"
        targets      = discovery.relabel.metrics_trivy.output
        honor_labels = true
        forward_to   = [prometheus.relabel.metrics_service.receiver]
    }

My values.yaml - I basically copied the settings from the Kubernetes guide in my Grafana Cloud account (Infrastructure / Kubernetes) and updated them according to the list of breaking changes:

cluster:
  name: my-cluster
externalServices:
  prometheus:
    host: xyz
    basicAuth:
      username: "xyz"
      password: REPLACE_WITH_ACCESS_POLICY_TOKEN
  loki:
    host: xyz
    basicAuth:
      username: "xyz"
      password: REPLACE_WITH_ACCESS_POLICY_TOKEN
metrics:
  enabled: true
  cost:
    enabled: true
  node-exporter:
    enabled: true
logs:
  enabled: true
  pod_logs:
    enabled: true
  cluster_events:
    enabled: true
traces:
  enabled: false
receivers:
  grpc:
    enabled: false
  http:
    enabled: false
  zipkin:
    enabled: false
opencost:
  enabled: true
  opencost:
    exporter:
      defaultClusterId: my-cluster
    prometheus:
      external:
        url: xyz
kube-state-metrics:
  enabled: true
prometheus-node-exporter:
  enabled: true
prometheus-operator-crds:
  enabled: true
alloy: {}
alloy-events: {}
alloy-logs: {}

I responded on the public Slack, but I'll ask here too:

prometheus.relabel.metrics_service.receiver should be correct. I'm not sure why it says it's missing.
The next thing I'd look at is the ConfigMap for Alloy, to see if there's a prometheus.relabel "metrics_service" component defined there.
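When metrics are enabled, the generated config.alloy should contain a block along these lines (a rough sketch - the exact relabel rules and settings depend on the chart version):

prometheus.relabel "metrics_service" {
  // Forwards everything on to the remote_write component for the metrics service.
  forward_to = [prometheus.remote_write.metrics_service.receiver]
}

If that block is missing from the ConfigMap, nothing in extraConfig can resolve prometheus.relabel.metrics_service.receiver.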

I noticed that your first values file shows

k8s-monitoring:
  extraConfig: |-

How are you deploying this chart? With helm install grafana/k8s-monitoring, or through some other means?
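That top-level k8s-monitoring: key usually means the chart is pulled in as a dependency of a parent (umbrella) chart rather than installed directly, i.e. something like this in the parent Chart.yaml (the version here is only an example):

dependencies:
  - name: k8s-monitoring
    version: 1.0.12
    repository: https://grafana.github.io/helm-charts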

Thanks @petewall, looking at the ConfigMap helped me solve it.

kubectl -n grafana-alloy-k8s-monitoring get cm grafana-alloy-k8s-monitoring -o yaml
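Grepping the rendered config for the component name makes the check quick (same command, just piped through grep):

kubectl -n grafana-alloy-k8s-monitoring get cm grafana-alloy-k8s-monitoring -o yaml | grep -n 'prometheus.relabel "metrics_service"'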

I was setting metrics.enabled = false for our QA cluster while still passing the extraConfig. Looking at the k8s-monitoring code, I noticed you added some checks a couple of months ago that, I'm guessing, stop prometheus.relabel.metrics_service (and the related components) from being generated when metrics are disabled.

Removing my extraConfig when setting metrics.enabled = false got Alloy working.
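So the QA cluster values now boil down to roughly this (a sketch of the relevant part; anything that forwards to prometheus.relabel.metrics_service.receiver only goes into clusters where metrics are enabled):

k8s-monitoring:
  metrics:
    enabled: false
  # no extraConfig here - the metrics_service components it references
  # are only generated when metrics.enabled is true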

Glad to hear you figured it out!