zitadel/zitadel-charts

[Bug]: masterkey or masterkeySecretName values not recognized when used in an umbrella Chart

Closed this issue · 17 comments

Preflight Checklist

  • I could not find a solution in the documentation, the existing issues or discussions
  • I have joined the ZITADEL chat

Version

7.6.0

App version

No response

Describe the problem caused by this bug

Hello,
I'm trying to deploy ZITADEL with Argo CD using the app-of-apps pattern with umbrella charts.
The structure is:
zitadel-folder
-- Chart.yaml
-- values.yaml
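The Chart.yaml declares the ZITADEL chart as a remote dependency, roughly like this (a sketch only; the umbrella chart's own name and version are placeholders, and the repository URL is the public ZITADEL Helm repo):

apiVersion: v2
name: zitadel
version: 0.1.0
dependencies:
  - name: zitadel
    version: 7.6.0
    repository: https://charts.zitadel.com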

The issue is that the masterkey doesn't seem to be recognized when deployed this way.
Screenshot 2024-01-29 at 02 29 19

Screenshot 2024-01-29 at 02 35 19

Both screenshots were taken using the values file at https://github.com/zitadel/zitadel-charts/blob/main/examples/2-postgres-secure/zitadel-values.yaml.
I've tested with my own values file as well as with one of the examples provided in the repo's examples folder, and I get the same error whether I set masterkey or masterkeySecretName.
This is not the first app I've deployed this way; I have about 20.

To reproduce

  1. Set up Argo CD
  2. Set up the app-of-apps pattern (an example Application manifest is sketched below)
  3. Create a folder for ZITADEL with the Chart.yaml and values.yaml files
  4. Deploy
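A minimal Argo CD Application pointing at that folder might look roughly like this (a sketch only; names and namespaces are placeholders, the actual manifests are in the repo linked under Additional Context):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: zitadel
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/ilbarone87/testing
    targetRevision: HEAD
    path: app-of-apps-testing/system/zitadel
  destination:
    server: https://kubernetes.default.svc
    namespace: zitadel
  syncPolicy:
    automated: {}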

Logs

No response

Expected behavior

Be able to deploy zitadel

Relevant Configuration

K8s version: rke2
ArgoCD version: 2.9.5

Additional Context

This is my personal values.yaml:

zitadel:
  replicaCount: 2
  masterkeySecretName: zitadel-masterkey
  configmapConfig:
    Database:
      Postgres:
        Host: postgres-cluster-zitadel-rw.svc.cluster.local
        Port: 5432
        Database: zitadel
        MaxOpenConns: 20
        MaxIdleConns: 10
        MaxConnLifetime: 30m
        MaxConnIdleTime: 5m
        User:
          Username: zitadel
          SSL:
            Mode: verify-full
        Admin:
          Username: postgres
          SSL:
            Mode: verify-full
  dbSslUserCrtSecret: zitadel-cluster-user-cert
  service:
    type: LoadBalancer
    # If service type is "ClusterIP", this can optionally be set to a fixed IP address.
    clusterIP: ""
    port: 8080
    protocol: http2
    annotations: {}
  metrics:
    enabled: true
    serviceMonitor:
    # If true, the chart creates a ServiceMonitor that is compatible with Prometheus Operator
    # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor.
    # The Prometheus community Helm chart installs this operator
    # https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#kube-prometheus-stack
      enabled: true

For reference, this is the manifest I'm using:
https://github.com/ilbarone87/testing

Added a few configuration details like the k8s and Argo CD versions, and the repo link.

Hm I started looking into this and do not see what breaks.

Can you provide me with a rendered output of the files?

@fforootd this is what I get if I try to render locally:

helm install zitadel --dry-run --debug testing/app-of-apps-testing/system/zitadel/
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /root/testing/app-of-apps-testing/system/zitadel

Error: INSTALLATION FAILED: execution error at (zitadel/charts/zitadel/templates/secret_zitadel-masterkey.yaml:2:4): Either set .Values.zitadel.masterkey xor .Values.zitadel.masterkeySecretName
helm.go:84: [debug] execution error at (zitadel/charts/zitadel/templates/secret_zitadel-masterkey.yaml:2:4): Either set .Values.zitadel.masterkey xor .Values.zitadel.masterkeySecretName
INSTALLATION FAILED
main.newInstallCmd.func2
        helm.sh/helm/v3/cmd/helm/install.go:154
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.8.0/command.go:983
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.8.0/command.go:1115
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.8.0/command.go:1039
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:267
runtime.goexit
        runtime/asm_amd64.s:1650

If I remove the offending IF, the chart renders without issues:

  dbSslCaCrtSecret: ""
  dbSslUserCrtSecret: ""
  masterkey: THISISATEST
  masterkeyAnnotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  masterkeySecretName: THISISATEST
  secretConfig: null
  secretConfigAnnotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  selfSignedCert:
    additionalDnsName: null
    enabled: false

HOOKS:
---
# Source: zitadel/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: zitadel
  labels:
    helm.sh/chart: zitadel-7.6.1
    app.kubernetes.io/name: zitadel
    app.kubernetes.io/instance: zitadel
    app.kubernetes.io/version: "v2.41.1"
    app.kubernetes.io/managed-by: Helm
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
---
# Source: zitadel/templates/secret_zitadel-masterkey.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: zitadel-masterkey
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "0"
  labels:
    helm.sh/chart: zitadel-7.6.1
    app.kubernetes.io/name: zitadel
    app.kubernetes.io/instance: zitadel
    app.kubernetes.io/version: "v2.41.1"
    app.kubernetes.io/managed-by: Helm
stringData:
  masterkey: THISISATEST

Well that is interesting... @eliobischof @stebenz do you have a clue why this could be?

A quick google on my end did not bring up any good explanation

@fforootd I also think the problem only shows up when helm install has to run a dependency build first, because the repo is pulled from remote in my Chart.yaml. For some reason it doesn't like that build.
Indeed, if I git clone your chart locally so that all the template YAML files are available in the same folder, there is no issue.
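To make it concrete, the failing render above is from the umbrella folder, where the dependency first has to be pulled from the remote repo, roughly:

helm dependency build testing/app-of-apps-testing/system/zitadel/
helm install zitadel --dry-run --debug testing/app-of-apps-testing/system/zitadel/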

Hm, that is an interesting hypothesis.

@eliobischof do we need that IF, or could we remove it without any notable impact?

@fforootd any update here?

Let's wait for @eliobischof's comment.

@fforootd @eliobischof any update on this?

@fforootd @eliobischof any update on this?

I think I'll have time to check it next week.

@ilbarone87 with #162, we recently made the Helm hooks for secrets and config maps configurable.

Can you try replacing the Helm hooks with Argo hooks? If that doesn't work for you, I'll reopen the issue.
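For example, something along these lines in your values should swap the annotations (the value keys are the ones visible in your rendered output above; the Argo CD hook annotations are what I'd expect them to be, so please double-check):

zitadel:
  masterkeyAnnotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
  secretConfigAnnotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation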

@eliobischof @fforootd hmm, doesn't seem to be working (screenshots attached).

@eliobischof Might be worth a shout to the Helm folks? It could be that helm template breaks when it has to render multiple conditions like the one in the ZITADEL template: {{- if (or (and ... ...) (and (not ...) (not ...) ) }}
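Based on the error message, I'd guess the guard looks roughly like this (just my reconstruction, not the actual template), which fails whenever both values come through empty, i.e. when the values aren't picked up at all:

{{- if or (and .Values.zitadel.masterkey .Values.zitadel.masterkeySecretName) (and (not .Values.zitadel.masterkey) (not .Values.zitadel.masterkeySecretName)) }}
{{- fail "Either set .Values.zitadel.masterkey xor .Values.zitadel.masterkeySecretName" }}
{{- end }}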

I got it now.
Because you define ZITADEL in your umbrella chart as a dependency that you call zitadel, you also need to specify the dependency's values under the YAML property that matches your dependency's name.
So your umbrella values should look similar to this:

zitadel:
  zitadel:
    replicaCount: 2
    masterkeySecretName: zitadel-masterkey
# ... the rest of your config indented correctly

If this also doesn't work, I'll reopen the issue again.

@eliobischof Legend! That worked! I clearly messed up the indentation, so some other values haven't been picked up, but it has definitely rendered fully. Very weird, as all my deployments use umbrella charts and in none of them do I have to set it like this. They all look like:

Prometheus:
  enabled: true
  service: loadbalancer 
  etc...etc...

Thanks a lot