falcosecurity/charts

falcosidekick config.existingSecret causing conflicts in helm chart v0.8.2

Closed this issue · 3 comments

Describe the bug
In falcosidekick Helm chart v0.8.2, when using config.existingSecret, the default falcosidekick secret is also loaded into the environment, causing conflicts and CrashLoops in falcosidekick.

How to reproduce it
Use config.existingSecret with a secret that loads key:value pairs that also exist in the default falcosidekick secret.
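
For illustration (the secret name and the overlapping key below are assumptions, not taken from the issue), a values file along these lines produces the overlap:

config:
  # secret managed outside the chart, e.g. by Vault or ExternalSecrets (hypothetical name)
  existingSecret: "falcosidekick-external-secret"
  # setting a chart value such as slack.webhookurl also renders the same key
  # into the chart-managed secret, so both secrets now carry overlapping data
  slack:
    webhookurl: "https://hooks.slack.com/services/XXX"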

Expected behaviour
When using config.existingSecret, the falcosidekick Helm chart should only use that secret and not the default one created by the chart.

Screenshots
This is most likely caused by this change
https://github.com/falcosecurity/charts/commit/f8205957df7b0a6b7fde6a83fe16d6bc8f0e24be#diff-847ce2ce19ea600eb056e59[…]76c4c12fe8d5b689f4768R98-R104

Helm chart v0.7.22 (uses either existingSecret OR the default created by the Helm chart)

            - secretRef:
                {{- if .Values.config.existingSecret }}
                name: {{ .Values.config.existingSecret }}
                {{- else }}
                name: {{ include "falcosidekick.fullname" . }}
                {{- end }}

Helm chart v0.8.2 (always loads the default from the Helm chart AND existingSecret if used)

            - secretRef:
                name: {{ include "falcosidekick.fullname" . }}
            {{- if .Values.config.existingSecret }}
            - secretRef:
                name: {{ .Values.config.existingSecret }}
            {{- end }}
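
With the v0.8.2 template, the rendered Deployment therefore ends up with both entries under envFrom, roughly like this (secret names here are placeholders):

            envFrom:
              - secretRef:
                  name: falcosidekick                    # chart-managed secret
              - secretRef:
                  name: falcosidekick-external-secret    # config.existingSecret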

Environment

  • Falco version:
  • System info:
  • Cloud provider or hardware configuration: AWS EKS
  • OS: Linux
  • Kernel:
  • Installation method: Kubernetes Helm

Additional context
Slack discussion: https://kubernetes.slack.com/archives/CMWH3EH32/p1720704347316519

Hi,

This is a feature requested several times by users who rely on an external secret manager like Vault. I might have introduced a bug. I'll dig into it. Thanks.

After some tests, I'm pretty confident the new way of managing the secrets works. You can mix secrets created by the chart with secrets created by an external manager, and even use them as extraEnv if you want.
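
A sketch of what that mixing can look like in values (all names here are illustrative, and the extraEnv entry assumes the usual Kubernetes name/valueFrom list form):

config:
  # keys rendered into the chart-managed secret
  slack:
    webhookurl: "https://hooks.slack.com/services/XXX"
  # additional secret managed outside the chart (e.g. by Vault)
  existingSecret: "falcosidekick-vault-secret"

extraEnv:
  # hypothetical variable pulled straight from the external secret
  - name: CUSTOM_SETTING
    valueFrom:
      secretKeyRef:
        name: falcosidekick-vault-secret
        key: custom-setting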

The error in your situation comes from your customFields value (as discussed together on Slack):

config:
  customfields: "env:dev,source:falco"

Which creates this error:

panic: descriptor Desc{fqName: "falco_events", help: "", constLabels: {}, variableLabels: {hostname,rule,priority,source,k8s_ns_name,k8s_pod_name,env,source}} is invalid: duplicate label names in constant and variable labels for metric "falco_events"                   

This is because the customFields are automatically used as labels for the Prometheus metrics, but source is already one of the built-in labels, hence the duplicate label names panic. By replacing it with event.source or something else, the pods are up and running.

I ended up using the following config to work around this issue:

config:
  customfields: "env:dev,service:falco"