Updating a specific service's image tag in values.yaml triggers deployment of all other services
nobilexdev opened this issue · 4 comments
Describe the bug
We are experiencing an issue with Flux v2 where updating the image tag for a specific service in the values.yaml file results in the deployment of all services, even though the change is only relevant to one service.
Steps to reproduce
- Have a HelmRelease configured with multiple services (a minimal sketch follows these steps).
- Update the image tag for one specific service in the values.yaml file.
- Observe that all services are redeployed, not just the one with the updated image tag.
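For illustration, a minimal sketch of such a setup; the chart path, repository, and service names (`my-app`, `my-repo`, `serviceA`, `serviceB`) are hypothetical stand-ins, and the tags are shown inline under `spec.values` for brevity (in the report they live in the chart's values.yaml):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: ./charts/my-app        # chart rendering one Deployment per service
      sourceRef:
        kind: GitRepository
        name: my-repo
  values:
    serviceA:
      image:
        tag: "1.2.3"                # only this tag is bumped in Git
    serviceB:
      image:
        tag: "4.5.6"                # untouched, yet its Deployment also rolls out
```

Only serviceA's Deployment should roll out, but every Deployment rendered by the release restarts.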
Expected behavior
Only the service with the updated image tag should be redeployed.
Screenshots and recordings
No response
OS / Distro
Linux
Flux version
2.3.0
Flux check
► checking prerequisites
✔ Kubernetes 1.28.11-gke.1019001 >=1.28.0-0
► checking version in cluster
✔ distribution: flux-v2.3.0
✔ bootstrapped: true
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v1.0.1
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v1.3.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v1.3.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v1.3.0
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta3
✔ buckets.source.toolkit.fluxcd.io/v1beta2
✔ gitrepositories.source.toolkit.fluxcd.io/v1
✔ helmcharts.source.toolkit.fluxcd.io/v1
✔ helmreleases.helm.toolkit.fluxcd.io/v2
✔ helmrepositories.source.toolkit.fluxcd.io/v1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta3
✔ receivers.notification.toolkit.fluxcd.io/v1
✔ all checks passed
Git provider
No response
Container Registry provider
No response
Additional context
This issue did not occur in Flux v1, suggesting a regression or change in behavior in Flux v2.
Code of Conduct
- I agree to follow this project's Code of Conduct
This has nothing to do with Flux; check the Helm chart source code to see why it happens. Test it with `helm upgrade`.
Sorry @stefanprodan, I would disagree on this, as the same Helm chart source works perfectly fine with Flux v1.
@stefanprodan I discovered the issue was with the metadata label `helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}` (which is part of the recommended Helm chart labels). This label was causing full redeployments; after removing it, the unnecessary redeployments stopped. However, I'm curious why this wasn't an issue in Flux v1.
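For context, a sketch of where that label usually lives in a chart scaffolded with `helm create`; the chart name `my-app` is hypothetical, and whether the helper is included in the pod template depends on the chart:

```yaml
# templates/_helpers.tpl (excerpt)
{{- define "my-app.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

# templates/deployment.yaml (excerpt)
spec:
  template:
    metadata:
      labels:
        {{- include "my-app.labels" . | nindent 8 }}
```

Because the label value embeds `.Chart.Version`, anything that changes the packaged chart version (for example, a Git-sourced chart with `reconcileStrategy: Revision`, which appends the source revision as build metadata) rewrites the pod template of every Deployment that includes the helper, so Helm rolls them all even when only one service's image tag changed.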
Probably some bug in helm-operator around label updates. Flux v1 was archived years ago, so it's irrelevant how it behaved.