prometheus-community/helm-charts

[kube-state-metrics] Unable to run kube-state-metrics in namespaced mode

alita1991 opened this issue · 2 comments

Describe the bug

W0628 16:10:51.825280       1 reflector.go:539] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "namespaces" in API group "" at the cluster scope
E0628 16:10:51.825320       1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "namespaces" in API group "" at the cluster scope
W0628 16:10:56.437568       1 reflector.go:539] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
E0628 16:10:56.437608       1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.CertificateSigningRequest: failed to list *v1.CertificateSigningRequest: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
W0628 16:11:04.475939       1 reflector.go:539] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0628 16:11:04.476119       1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
W0628 16:11:12.193330       1 reflector.go:539] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
E0628 16:11:12.193375       1 reflector.go:147] pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.MutatingWebhookConfiguration: failed to list *v1.MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User "system:serviceaccount:argocd-openshift:observability-kube-state-metrics" cannot list resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope

What's your helm version?

3.14.0

What's your kubectl version?

v1.29.3

Which chart?

kube-state-metrics

What's the chart version?

5.19.0

What happened?

After installing kube-state-metrics in namespaced mode (cluster role disabled and a namespace configured), I checked the container logs and found a large number of RBAC errors caused by the collection of cluster-wide resources.

What you expected to happen?

When namespaced mode is enabled (by using a Role instead of a ClusterRole), cluster-scoped resources should not be watched.

How to reproduce it?

  1. Install kube-state-metrics using a configuration similar to the following:
rbac:
  useClusterRole: false
securityContext:
  runAsGroup: 1000700000
  runAsUser: 1000700000
  fsGroup: 1000700000
namespaces: "argocd-openshift"
  2. Check the logs of the kube-state-metrics container (see the sketch after this list).
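
For step 2, a minimal way to inspect the logs, assuming the release is named observability (inferred from the service account name in the log excerpt above, so the deployment name may differ in your setup):

kubectl logs deployment/observability-kube-state-metrics -n argocd-openshift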

Enter the changed values of values.yaml?

rbac:
  useClusterRole: false
securityContext:
  runAsGroup: 1000700000
  runAsUser: 1000700000
  fsGroup: 1000700000
namespaces: "argocd-openshift"

Enter the command that you execute and failing/misfunctioning.

I installed it via ArgoCD.
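
For reference, the ArgoCD Application renders the chart roughly like the following Helm invocation (a sketch only; the release name observability is inferred from the service account name in the logs):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install observability prometheus-community/kube-state-metrics \
  --namespace argocd-openshift \
  --version 5.19.0 \
  -f values.yaml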

Anything else we need to know?

I'm installing the service in a Kubernetes cluster where I only have access to a single namespace.

The errors are raised because KSM is still being asked to query cluster-scoped resources by the default collector configuration (the collectors field is not adjusted automatically in this mode). When running with a Role scoped to selected namespaces, the collectors below cannot succeed and should be disabled; in other words, only collectors for namespaced kinds should be enabled (see the kubectl sketch after the list).

- certificatesigningrequests
- mutatingwebhookconfigurations
- namespaces
- nodes
- persistentvolumes
- storageclasses
- validatingwebhookconfigurations
- volumeattachments
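
A quick way to check which kinds are cluster-scoped (and therefore cannot be listed with a namespaced Role) is to ask the API server directly; this is plain kubectl and makes no chart-specific assumptions:

kubectl api-resources --namespaced=false -o name   # kinds that require cluster-wide RBAC
kubectl api-resources --namespaced=true -o name    # kinds that work with a namespaced Role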

A configuration sample for namespaced discovery in the release namespace only:

rbac:
  useClusterRole: false
  useExistingRole: false
releaseNamespace: true
collectors:
  - configmaps
  - cronjobs
  - daemonsets
  - deployments
  - endpoints
  - horizontalpodautoscalers
  - ingresses
  - jobs
  - leases
  - limitranges
  - networkpolicies
  - persistentvolumeclaims
  - poddisruptionbudgets
  - pods
  - replicasets
  - replicationcontrollers
  - resourcequotas
  - secrets
  - services
  - statefulsets
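
To double-check that the resulting Role covers what the pod actually requests, an impersonated RBAC check can help (a sketch, assuming you are allowed to impersonate the service account; the account name below is taken from the log lines above):

kubectl auth can-i list pods \
  --as=system:serviceaccount:argocd-openshift:observability-kube-state-metrics \
  -n argocd-openshift

kubectl auth can-i list namespaces \
  --as=system:serviceaccount:argocd-openshift:observability-kube-state-metrics

The first command should return yes and the second no, which is consistent with keeping the cluster-scoped collectors disabled.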

Thank you for your quick response, it's working now.