fluxcd/flux2

Moving one kustomization from local to remote cluster does not delete workloads in local

Closed this issue · 7 comments

Describe the bug

Install this Kustomization on the local Flux cluster:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: service
  namespace: ${SERVICE_NAME}
spec:
  dependsOn:
  - name: generic-occne-kubeconfig
  - name: generic-occne
  force: true
  interval: 5m0s
  path: ./overlays/service
  postBuild:
    substitute:
      SERVICE_NAME: ${SERVICE_NAME}
    substituteFrom:
    - kind: ConfigMap
      name: service-clustervars-env
      optional: false
    - kind: ConfigMap
      name: cronjob-env
      optional: true
  prune: true
  sourceRef:
    kind: GitRepository
    name: ${SERVICE_NAME}
    namespace: ${SERVICE_NAME}
  targetNamespace: ${SERVICE_NAME}

Then, change your mind and decide to deploy to a remote cluster by updating the Kustomization with:

spec:
  kubeConfig:
    secretRef:
      name: occne-target-kubeconfig

The workloads originally created on the local Flux cluster are not deleted.

Steps to reproduce

See above

Expected behavior

The workloads originally created on the local Flux cluster should be deleted, shouldn't they?

Screenshots and recordings

No response

OS / Distro

Linux

Flux version

2.3.0

Flux check

► checking prerequisites
✔ Kubernetes 1.28.11+rke2r1 >=1.28.0-0
► checking version in cluster
✔ distribution: flux-v2.3.0
✔ bootstrapped: true
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v1.0.1
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.3.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v1.3.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v1.3.0
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta3
✔ buckets.source.toolkit.fluxcd.io/v1beta2
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1
✔ helmreleases.helm.toolkit.fluxcd.io/v2
✔ helmrepositories.source.toolkit.fluxcd.io/v1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta3
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed

Git provider

No response

Container Registry provider

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

You should rename the Flux Kustomization; workloads are tracked by their group/kind/name/namespace, not by cluster.
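A minimal sketch of that rename (the `-remote` suffix is an illustrative choice, not something from this thread): when the Kustomization object is renamed, the old `service` object goes away and, with `prune: true`, its finalizer should garbage-collect the workloads it created on the local cluster.

```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  # New name: the old "service" Kustomization is deleted, and its
  # finalizer prunes the workloads it had applied to the local cluster.
  name: service-remote
  namespace: ${SERVICE_NAME}
spec:
  kubeConfig:
    secretRef:
      name: occne-target-kubeconfig
  prune: true
  # ...remaining fields (interval, path, sourceRef, etc.) unchanged
```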

Thanks for your swift reply @stefanprodan
I will give it a try, but it does not look that easy in my case because my Flux release intents are built up from Kustomize component patches.

kustomization.yaml

---
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

patches:
  - path: release-patch.yaml
    target:
      group: kustomize.toolkit.fluxcd.io
      version: v1
      kind: Kustomization
      name: ....

release-patch.yaml

---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: .*
  namespace: ${SERVICE_NAME}
spec:
  kubeConfig:
    secretRef:
      name: occne-target-kubeconfig
  targetNamespace: ${SERVICE_NAME}

Maybe we should make the kubeConfig field immutable. This type of change was never supported, it’s the same as changing the label selectors in a Deployment, you cannot do that without deleting/creating the deployment.

Agreed @stefanprodan ... Meanwhile, I tried applying a Kustomize builtin transformer to certain resources in order to distinguish Flux Kustomizations per cluster target. Strangely, `metadata.name: myRelease` is ignored, so the `-occne` suffix is appended to all Flux Kustomizations, but that fits the bill.

transformers:
- |-
  apiVersion: builtin
  kind: PrefixSuffixTransformer
  metadata:
    name: myRelease
  suffix: -occne
  fieldSpecs:
  - path: metadata/name
    group: kustomize.toolkit.fluxcd.io
    kind: Kustomization
    version: v1
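For illustration, assuming `SERVICE_NAME` resolves to `service`, the transformer above would rewrite the Flux Kustomization like this (a sketch of the expected output, not captured from a real run):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: service-occne   # "-occne" appended by the PrefixSuffixTransformer
  namespace: service
```

Note that the transformer's `metadata.name` only identifies the transformer configuration itself; it is the `fieldSpecs` that select which resources receive the suffix, which would explain why `myRelease` appears to be ignored.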

I am going to open a PR to make the kubeConfig field immutable ...

There is a major issue with that: people should be rotating the token in the kubeconfig, which is usually done using the Kustomize secret generator, which changes the secret name. Making that field read-only would break a valid use case. The cluster doesn't change, but the auth does.
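As a sketch of that rotation pattern (the file name and layout here are assumptions, not taken from this thread), a Kustomize `secretGenerator` produces a content-hashed secret name, so rotating the token changes the name that `spec.kubeConfig.secretRef.name` must point to:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: occne-target-kubeconfig
  files:
  # "kubeconfig" is a hypothetical local file holding the rotated kubeconfig;
  # Flux reads it from the "value" key of the resulting secret.
  - value=kubeconfig
# The generated secret name carries a content hash suffix, so it changes
# whenever the kubeconfig content changes.
```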

You are right @stefanprodan ... In conclusion, Flux Kustomization names must differ according to the target Kubernetes cluster.