netdata/helmchart

serviceaccount.create = false doesn't work

acuteaura opened this issue · 7 comments

the SA name is hardcoded into the deployment regardless, so all you get is a stuck ReplicaSet

Hi @acuteaura

Let's see our docs

serviceAccount.create
if true, create a service account

Our manifests

serviceaccount.yaml

We create the ServiceAccount only if serviceAccount.create is true, so far our docs are correct.

{{- if .Values.serviceAccount.create -}}
kind: ServiceAccount
apiVersion: v1
metadata:
  labels:
    app: {{ template "netdata.name" . }}
    chart: {{ template "netdata.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ .Values.serviceAccount.name }}
{{- end -}}

clusterrolebinding.yaml

Regardless of serviceAccount.create, we bind the ClusterRole to serviceAccount.name.

subjects:
- kind: ServiceAccount
  name: {{ .Values.serviceAccount.name }}

deployment.yaml (parent)

Regardless of serviceAccount.create, we set serviceAccountName to serviceAccount.name.

serviceAccountName: {{ .Values.serviceAccount.name }}

daemonset.yaml (child)

Regardless of serviceAccount.create, we set serviceAccountName to serviceAccount.name.

serviceAccountName: {{ .Values.serviceAccount.name }}
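
For comparison, many charts fall back to the default service account when creation is disabled, along these lines (just a sketch of the pattern, not code from this chart):

serviceAccountName: {{ if .Values.serviceAccount.create }}{{ .Values.serviceAccount.name }}{{ else }}default{{ end }}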


It looks like setting serviceAccount.create to false on its own breaks the installation, but that is kind of expected 🤔

@acuteaura what is your goal? To get Netdata running and working with the default service account?

We only use the parent Netdata as a streaming endpoint, with all collectors disabled, for machines doing CI outside the cluster, and then scrape the data with Prometheus, so there is no need for our instance to ever call the Kubernetes API.

So your setup is: the parent installed inside k8s, the children directly on hosts, and they stream to the parent?

If so, perhaps this will do:

  • rbac.create: false
  • serviceAccount.create: false
  • serviceAccount.name: default
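
In plain values.yaml form that would be roughly (a sketch using only the keys listed above):

rbac:
  create: false
serviceAccount:
  create: false
  name: default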

That'd work, I guess I just expected it to work differently.

Unfortunately our use case went far far outside the scope of this helm chart, so I had to fork and hardcode some things. I'm hoping I can generalize some of those changes (multiple ports, mostly) and upstream them eventually.

Thanks for your time.

our use case went far far outside the scope of this helm chart, so I had to fork and hardcode some things.

Have you considered creating an issue(s)?

If you don't feel like doing it now - would be nice to know what you had to hardcode, can you tell? I am really curious!

We need to allow streaming from the internet while restricting access to the other parts that usually sit on the same listener. We figured out too late that streaming is not actually encapsulated in HTTP, so we couldn't use our usual authentication mechanisms (IAP proxy). ACLs weren't a great solution either, because the service CIDR range is not always the same, and Google LBs, unless they use the PROXY protocol, also originate their traffic from 10.0.0.0/8. So we added a second, unexposed port to the deployment for things like readiness/liveness probes, Prometheus scraping, and debug access to the dashboard.
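
Roughly, the extra port in our fork looks something like this in the parent deployment spec (a sketch only; the port names and numbers here are made up for illustration):

ports:
  - name: streaming        # exposed through the LB so external machines can stream in
    containerPort: 19999
  - name: internal         # readiness/liveness probes, Prometheus scraping,
    containerPort: 19998   # and debug access to the dashboard; not exposed externally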

Thanks for sharing!