grafana/k8s-monitoring-helm

Opencost UI shows no values

Closed · 3 comments

We are using the k8s-monitoring-helm chart with a fairly standard configuration to connect our Azure Kubernetes Service (AKS) cluster to our Grafana Cloud (Pro) stack, and we have enabled OpenCost and the OpenCost UI. The OpenCost metrics seem to be ingested into Grafana Cloud successfully, because our cost dashboards in the Observability section of Grafana Cloud are showing data.

However, the OpenCost UI does not seem to show the correct data: it is pretty much empty and does not even list the namespace names.

Helm Chart version: 0.5.1

Our config:

    opencost:
      opencost:
        exporter:
          defaultClusterId: aks-aks-poc-dev
        prometheus:
          external:
            url: https://prometheus-prod-22-prod-eu-west-3.grafana.net/api/prom
        ui:
          enabled: true

The OpenCost pod is running, but at startup the logs contain the following error (logged at info level):

    2023-11-23T16:14:53.265057082Z INF Success: retrieved the 'up' query against prometheus at: https://prometheus-prod-22-prod-eu-west-3.grafana.net/api/prom
    2023-11-23T16:14:53.270207892Z INF No valid prometheus config file at https://prometheus-prod-22-prod-eu-west-3.grafana.net/api/prom. Error: client_error: client error: 404 . Troubleshooting help available at: http://docs.kubecost.com/custom-prom#troubleshoot. Ignore if using cortex/thanos here.
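
For context, the first line shows that the query API is reachable, while the second comes from OpenCost's check for a Prometheus configuration file, which the Mimir-backed Grafana Cloud endpoint apparently does not serve (hence the 404). A rough sketch of reproducing both checks from outside the cluster is below; the /api/v1/status/config path is an assumption based on the log message, and the credentials are placeholders, not values from this issue.

    import requests

    BASE = "https://prometheus-prod-22-prod-eu-west-3.grafana.net/api/prom"
    # Placeholders: Grafana Cloud metrics instance ID and an API token.
    AUTH = ("123456", "glc_example_token")

    # The 'up' query that the first log line reports as successful.
    r = requests.get(f"{BASE}/api/v1/query", params={"query": "up"}, auth=AUTH)
    print("query API:", r.status_code)

    # The Prometheus config-file check; the exact path is assumed from the log
    # message ("No valid prometheus config file"). Mimir answers 404 here.
    r = requests.get(f"{BASE}/api/v1/status/config", auth=AUTH)
    print("config endpoint:", r.status_code)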

In the UI we only get the following:

[Screenshot: the OpenCost UI showing no data]

skl commented

That error is expected when using Mimir. Are there any other errors in the pod logs? Can you try debugging the OpenCost UI and see whether its queries are working as expected?
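
For reference, one way to check whether the data the UI needs is queryable at all is to run a PromQL query for an OpenCost metric directly against the Grafana Cloud endpoint. A minimal sketch follows; the metric name container_cpu_allocation and the credentials are assumptions/placeholders rather than values taken from this issue.

    import requests

    BASE = "https://prometheus-prod-22-prod-eu-west-3.grafana.net/api/prom"
    AUTH = ("123456", "glc_example_token")  # placeholder instance ID / API token

    # Count series per namespace for an OpenCost allocation metric; if this
    # returns nothing, the UI has no namespace data to display either.
    resp = requests.get(
        f"{BASE}/api/v1/query",
        params={"query": "count by (namespace) (container_cpu_allocation)"},
        auth=AUTH,
    )
    resp.raise_for_status()
    for series in resp.json()["data"]["result"]:
        print(series["metric"].get("namespace", "<none>"), series["value"][1])

If this returns per-namespace results while the UI still shows nothing, the problem is more likely in the UI or exporter configuration than in the metrics themselves.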

@skl Thanks for the reply. Switching to debug logging didn't reveal much more; the only additional output was some warnings, seemingly related to the fact that we hadn't set up an integration to get the correct pricing information for Azure. My colleague set up the OpenCost UI with version 0.6.0 of this chart on another cluster, and there the UI showed the data for all namespaces as expected, so it was probably an error on our end.

skl commented

Glad to know it's working now at least 👍