kubernetes-csi/csi-driver-nfs

Updating chart content without incrementing the version

Opened this issue · 8 comments

What happened:

The following commit updated the chart content without incrementing the version tag. I'm not sure if this is intended, but it means every Helm deployment pinned to that chart version silently picks up the new content without anyone noticing.

30c0f8f

(Screenshot, 2024-07-18, showing the changed chart content)
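For anyone who wants to verify the drift themselves, a minimal sketch (the chart repo URL follows the project's install docs; the exact digests will vary): pull the same chart version at two points in time and compare checksums.

$ helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
$ helm repo update
$ helm pull csi-driver-nfs/csi-driver-nfs --version v4.8.0
$ sha256sum csi-driver-nfs-v4.8.0.tgz
# a different digest for the same chart version confirms the content changed in place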

What you expected to happen:

If a new release is cut, the chart version should be incremented; already-published versions should remain immutable.
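For reference, Helm's convention is that any change to chart contents gets a version bump in Chart.yaml, even if the packaged application is unchanged. A sketch of what a patch bump for a CVE-only image update could look like (field values are illustrative):

apiVersion: v2
name: csi-driver-nfs
version: v4.8.1    # bumped because the packaged image references changed
appVersion: v4.8.0 # the driver release itself is unchanged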

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

@woehrl01 that's intended, mainly to fix CVEs in the sidecar containers.

@andyzhangx I see, but shouldn't a change, even for CVEs, result in a version increment?

I'm unable to use the newer chart version, since this image doesn't exist:

$ docker pull registry.k8s.io/sig-storage/nfsplugin:v4.8.0
Error response from daemon: manifest for registry.k8s.io/sig-storage/nfsplugin:v4.8.0 not found: manifest unknown: Failed to fetch "v4.8.0"
Failed to pull image "registry.k8s.io/sig-storage/nfsplugin:v4.8.0": rpc error: code = NotFound desc = failed to pull and unpack image "registry.k8s.io/sig-storage/nfsplugin:v4.8.0": failed to resolve reference "registry.k8s.io/sig-storage/nfsplugin:v4.8.0": registry.k8s.io/sig-storage/nfsplugin:v4.8.0: not found
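One way to check which tags actually exist in the registry before pinning (assuming skopeo or crane is installed; the two commands are interchangeable here):

$ skopeo list-tags docker://registry.k8s.io/sig-storage/nfsplugin
$ crane ls registry.k8s.io/sig-storage/nfsplugin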

The version is pinned here:

repository: registry.k8s.io/sig-storage/nfsplugin
tag: v4.8.0
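As a workaround until the tag is published, the pinned tag can be overridden back to one that exists. A sketch, assuming the image.nfs.* values path from the chart's values.yaml and that v4.7.0 is a published tag (verify with a tag listing first):

# values-override.yaml (hypothetical file name)
image:
  nfs:
    repository: registry.k8s.io/sig-storage/nfsplugin
    tag: v4.7.0

$ helm upgrade --install csi-driver-nfs csi-driver-nfs/csi-driver-nfs -f values-override.yaml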

Ah okay, thanks. I had only looked at recent issues.

I'm just gonna add a 4-hour delay to Renovate for this Helm chart :)
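For others who want the same guard, a sketch of the Renovate rule (renovate.json is JSON; minimumReleaseAge is the current name of the stability-delay option, and the package name match is an assumption about your repo's setup):

{
  "packageRules": [
    {
      "matchDatasources": ["helm"],
      "matchPackageNames": ["csi-driver-nfs"],
      "minimumReleaseAge": "4 hours"
    }
  ]
}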

> @andyzhangx I see, but shouldn't a change, even for CVEs, result in a version increment?

100% agreed. Pretty much every other chart I've ever seen increments its patch version for image updates. In any environment with strict change control, it's very important not to introduce spontaneous changes without a corresponding config change (which is kind of the whole point of infrastructure-as-code).

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

/remove-lifecycle stale