kubernetes/kubernetes

Allow forcing a PUT even when metadata.resourceVersion mismatches

Opened this issue · 33 comments

At the moment, when the resourceVersion mismatches, kubectl pulls the latest object and retries, which is pointless since it just repeats the same request afterwards.
Our use case is similar: we use the API to PUT the expected state, not caring what the cluster thinks it is ... but we still get metadata.resourceVersion: Invalid value errors, which is annoying and means an extra round-trip to fetch the latest version, and that can fail again if someone else has updated the resource in the meantime.

... so please support ?force=true or ?noResourceVersion to disable this check.
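
For reference, the round-trip described above is the standard optimistic-concurrency pattern; a minimal sketch with client-go's retry.RetryOnConflict (the clientset, namespace, and deployment name are placeholders, not anything from this issue):

// Sketch of the read-modify-write loop the issue wants to avoid: on a 409
// Conflict the object is re-fetched with its current resourceVersion and the
// change is re-applied. Assumes an initialized *kubernetes.Clientset.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func setReplicas(cs *kubernetes.Clientset, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// GET picks up the latest resourceVersion so the PUT precondition can pass.
		dep, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		dep.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(context.TODO(), dep, metav1.UpdateOptions{})
		return err // a conflict here triggers another iteration
	})
}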

/sig api

The resource version is used to avoid multiple requests updating the same object at the same time (optimistic concurrency). I think there should be some way to force the update, such as not passing the resource version in the update. I will investigate this.
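
For reference, "not passing the resource version" amounts to submitting the update with an empty metadata.resourceVersion; whether the server accepts that depends on the resource type's update strategy (the CRs reported later in this thread reject it). A hypothetical sketch with a typed client, placeholder names:

// Sketch: send the update with the resourceVersion cleared. Types whose update
// strategy allows unconditional updates overwrite the live object; others return
// "metadata.resourceVersion: ... must be specified for an update".
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func unconditionalUpdate(cs *kubernetes.Clientset, desired *corev1.ConfigMap) error {
	desired.ResourceVersion = "" // opt out of the optimistic-concurrency check
	_, err := cs.CoreV1().ConfigMaps(desired.Namespace).Update(context.TODO(), desired, metav1.UpdateOptions{})
	return err
}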

/assign

/sig api-machinery

/kind need-help

Hi, I'm just wondering: given that I have already created a Kubernetes Service, when I re-apply it with kubectl apply -f service.yaml I get the following errors:

* metadata.resourceVersion: Invalid value: "": must be specified for an update
* spec.clusterIP: Invalid value: "": field is immutable

Is this expected?

Just FYI: I am also having this issue when trying to force-apply my CRDs.

I am having the same issue with ClusterIssuer (certmanager.k8s.io/v1alpha1 to be specific)

Same issue with a DeploymentConfig via the Ansible k8s module... any workarounds? I need the deployment to update automatically when I change the ConfigMap >.<

@Asgoret how do you perform the update?

@irvifa
I deploy the ConfigMap, then deploy the DeploymentConfig again, then start the deploy task. So, for example: 1 (running) -> 2 (aborted) -> 3 (running). Here are the Ansible tasks:

  - name: Check exist of deployment config
    k8s_facts:
      verify_ssl: "{{ verify_ssl }}"
      api_key: "{{ deploy_token }}"
      host: "{{ master_api }}"
      api_version: v1
      kind: DeploymentConfig
      name: "{{ service_name }}"
      namespace: "{{ namespace }}"
    register: deployments

  - name: Rollout pod with new config
    k8s:
      verify_ssl: "{{ verify_ssl }}"
      api_key: "{{ deploy_token }}"
      host: "{{ master_api }}"
      state: "{{ state }}"
      definition: "{{ lookup('template', 'deploymentconfig.yaml.j2') }}"
    when: deployments.resources is defined

UPD: There is a separate task for deploying the DeploymentConfig; this is part of the ConfigMap deploy task.

Mine seems to be related to #11237.

@irvifa As far as I can see, it's a familiar issue with stateful fields... I think there is no straightforward mechanism for updating the DeploymentConfig.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Any updates on this? We are seeing this for consecutive kubectl apply runs of a BackendConfig.

Is the assumption that you can always kubectl apply the same resource twice no longer true?

I resolved this problem by editing the Service and removing the resourceVersion and clusterIP from the last-applied-configuration annotation:
kubectl edit svc svc-name
(remove resourceVersion and clusterIP from the annotation)
kubectl apply -f svc.yaml

That works for Services, but some resources require that a resourceVersion be sent.

In our case it turned out to be a different problem: the kubectl last-applied-configuration annotation contained the resourceVersion, which it should not. Simply running kubectl apply edit-last-applied to remove the resourceVersion was enough to make it work again.
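
If you need the same fix from code rather than kubectl, the annotation is just a JSON copy of the object stored under the key kubectl.kubernetes.io/last-applied-configuration; a hedged sketch of stripping the stray resourceVersion with client-go (the Service name and namespace are placeholders):

// Sketch: remove a stray metadata.resourceVersion from the
// kubectl.kubernetes.io/last-applied-configuration annotation of a Service.
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const lastApplied = "kubectl.kubernetes.io/last-applied-configuration"

func stripResourceVersionFromLastApplied(cs *kubernetes.Clientset, ns, name string) error {
	svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	raw, ok := svc.Annotations[lastApplied]
	if !ok {
		return nil // nothing to fix
	}
	var cfg map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		return err
	}
	if md, ok := cfg["metadata"].(map[string]interface{}); ok {
		delete(md, "resourceVersion") // the field that breaks subsequent kubectl apply
	}
	fixed, err := json.Marshal(cfg)
	if err != nil {
		return err
	}
	svc.Annotations[lastApplied] = string(fixed)
	_, err = cs.CoreV1().Services(ns).Update(context.TODO(), svc, metav1.UpdateOptions{})
	return err
}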

I get an HTTP 422 when I try it with
client.customResource(crdContext).edit(namespace, name, stream)
Not sure what exactly is wrong, but I would also say something is missing or wrong, because editing it manually works fine and createOrReplace also works fine.

(Using the fabric8 library.)

/lifecycle stale

/remove-lifecycle stale

Also had the same issue and found this helpful article:
https://www.timcosta.io/kubernetes-service-invalid-clusterip-or-resourceversion/
(TL;DR: remove the last-applied-configuration annotation)

Is there a fix for this? It's impacting CRs and CRDs outside of the standard k8s objects. Has anybody figured out repro steps for getting into the bad state where a resourceVersion ends up in the last-applied-configuration annotation?

This is still an issue today. But as a workaround, you can get the resourceVersion of the currently deployed object and set it on the new resource before updating. This worked for me, and I can do it programmatically.
Something like this:

// Carry the live object's resourceVersion over to the desired state before updating.
if newResource.GetResourceVersion() == "" {
  newResource.SetResourceVersion(oldResource.GetResourceVersion())
}
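
A slightly fuller, hedged version of that workaround, using client-go's dynamic client so it also covers CRs; the GroupVersionResource, namespace, and object here are placeholders:

// Sketch: fetch the live object, copy its resourceVersion onto the desired
// state, then update. Error handling is minimal.
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func putWithLiveResourceVersion(dc dynamic.Interface, gvr schema.GroupVersionResource, ns string, desired *unstructured.Unstructured) error {
	live, err := dc.Resource(gvr).Namespace(ns).Get(context.TODO(), desired.GetName(), metav1.GetOptions{})
	if err != nil {
		return err
	}
	desired.SetResourceVersion(live.GetResourceVersion()) // satisfy the update precondition
	_, err = dc.Resource(gvr).Namespace(ns).Update(context.TODO(), desired, metav1.UpdateOptions{})
	return err
}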

--force works.

I am hitting this issue too.
I defined two APIs: one using an aggregated API server, the other using a CRD.
I can update the aggregated one without a resourceVersion, but I get an error when I update the CR without a resourceVersion:

... is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
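
One hedged way to reconcile the two behaviors described above: try the update without a resourceVersion first, and only fall back to the GET-and-copy dance when the server rejects it. The GVR and object are placeholders; apierrors.IsInvalid matches the error class shown.

// Sketch: attempt an unconditional update; if the type insists on a
// resourceVersion (as the CR here does), copy it from the live object and retry once.
package example

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func updateAnyway(dc dynamic.Interface, gvr schema.GroupVersionResource, ns string, desired *unstructured.Unstructured) error {
	desired.SetResourceVersion("")
	_, err := dc.Resource(gvr).Namespace(ns).Update(context.TODO(), desired, metav1.UpdateOptions{})
	if err == nil || !apierrors.IsInvalid(err) {
		return err
	}
	live, getErr := dc.Resource(gvr).Namespace(ns).Get(context.TODO(), desired.GetName(), metav1.GetOptions{})
	if getErr != nil {
		return getErr
	}
	desired.SetResourceVersion(live.GetResourceVersion())
	_, err = dc.Resource(gvr).Namespace(ns).Update(context.TODO(), desired, metav1.UpdateOptions{})
	return err
}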

/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/lifecycle frozen