openshift/openshift-restclient-python

Status Update Bug with Apply Method

koflerm opened this issue · 10 comments

I have enabled the status subresource for a CRD (Custom Resource Definition), and I am using the apply method to create the custom resources and, later on, to update the status of the custom resource instances. I have printed out the YAML body that I hand over to the apply method, and it correctly includes the status field with the desired value. The problem is that apply does not seem to work, or rather it does not recognize changes in the status property of the YAML, which means the status of the custom resources is never updated.

@koflerm I think this is the intended behavior, according to the design proposal for the status subresource [1]:

If the /status subresource is enabled, the following behaviors change:

The main resource endpoint will ignore all changes in the status subpath. (note: it will not reject requests which try to change the status, following the existing semantics of other resources)
...

If you want to update the status subresource, you should be able to do that by accessing the status subresource and then sending an update to that endpoint, rather than the main resource endpoint. The code for that looks roughly like this:

import kubernetes
from openshift.dynamic import DynamicClient

api_version = 'apps.example.com/v1'
kind = 'Test'
name = 'test'
namespace = 'default'
desired_status = {
    'status': {
        'hello': 'world'
    }
}

client = DynamicClient(kubernetes.config.new_client_from_config())

cr_api = client.resources.get(api_version=api_version, kind=kind)
response = cr_api.status.patch(
    name=name,
    namespace=namespace,
    body=desired_status,
    content_type='application/merge-patch+json'
)

print(response)

[1] https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/customresources-subresources.md#status-behavior

@fabianvf Thanks for your quick reply! The problem is that I am already sending the requests to the status subresource, not to the main resource. I am doing this with the following code:

custom_resource_body = {
    "apiVersion": "foo.bar.com/v1beta1",
    "kind": "FooBar",
    "metadata": {
        "finalizers": [
            "foo.bar.com/afinalizer"
        ],
        "name": "foobar",
        "namespace": "foobarnamespace",
    },
    "spec": {
        "image": "foobar:1.0",
        "queue": {
            "type": "memory"
        }
    },
    "status": {
        "podState": "Initialized"
    }
}

resource = self.dyn_client.resources.get(
    api_version=api_version, kind=kind, singular_name=singular_name
).subresources.get("status")

resource.apply(
    body=custom_resource_body,
    namespace=namespace,
    name=name,
    **kwargs
)

In your code, I can see that you are using the patch method instead of the apply method. Is the apply method not supposed to be used for status updates, or for subresource updates in general?

@koflerm there is a minor bug with apply (you might consider it major):

  • if you apply a resource,
  • and the resource is then changed underneath by something else,
  • and you then apply the same spec again, where the field that changed underneath is unchanged in the spec being applied,
  • the change will be ignored.

The most obvious example of this that we've come across is as follows:

  1. apply a deployment X with replicas = 1
  2. kubectl scale deployment X --replicas=0
  3. update the deployment X with replicas = 1
  4. the deployment will still have replicas 0, as the apply mechanism thinks it hasn't changed between steps 1 and 3

kubectl apply handles this better, and we do need to fix that bug. My question to you is: could this bug explain your problem?
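The four steps above can be sketched with a toy diff in pure Python. This is an illustrative model, not the library's actual code: it assumes (hypothetically) that apply computes its patch by comparing the desired config only against the last-applied config, which is exactly why a change made underneath goes unnoticed.

```python
def two_way_patch(last_applied, desired):
    """Naive patch: only keys whose desired value differs from what was
    last applied. This models the buggy behavior described above."""
    return {k: v for k, v in desired.items() if last_applied.get(k) != v}


def three_way_patch(last_applied, live, desired):
    """Also restore fields that drifted in the live object, which is
    roughly what kubectl apply's three-way merge does."""
    return {k: v for k, v in desired.items()
            if last_applied.get(k) != v or live.get(k) != v}


last_applied = {"replicas": 1, "image": "nginx:1.19"}  # step 1: apply
live = {"replicas": 0, "image": "nginx:1.19"}          # step 2: kubectl scale
desired = {"replicas": 1, "image": "nginx:1.19"}       # step 3: re-apply

print(two_way_patch(last_applied, desired))            # {} (change is missed)
print(three_way_patch(last_applied, live, desired))    # {'replicas': 1}
```

With the two-way diff, replicas=1 looks unchanged between steps 1 and 3, so nothing is patched and the scale-down survives; the three-way variant notices the drift in the live object and restores it.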

@willthames I am updating the custom resources from two different operators, but both operators use the apply method, so I don't think the problem is related to this bug.

@fabianvf a short update: your example works fine if I use the patch method instead of the apply method, but this still leaves open the question of whether your apply method is supposed to work with the status subresource, or with subresources at all.
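For reference, the reason sending only the status fragment works with patch is the semantics of JSON merge patch (RFC 7386), which the `application/merge-patch+json` content type selects: dicts merge recursively, so everything outside the fragment is left untouched. A minimal pure-Python sketch of that merge (illustrative only, not the server's implementation):

```python
def json_merge_patch(target, patch):
    """Minimal RFC 7386 merge patch: dicts merge recursively,
    None deletes a key, anything else replaces the value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result


stored = {"spec": {"image": "foobar:1.0"},
          "status": {"podState": "Pending"}}
patch = {"status": {"podState": "Initialized"}}

print(json_merge_patch(stored, patch))
# spec is untouched; only status.podState changes
```

This is why the patch call on the status subresource only needs the `{'status': {...}}` fragment rather than the full custom resource body.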

@koflerm ah I see, yeah, I think apply is probably not supposed to be used on subresources; we should probably prevent it from being called in the subresource class.

@fabianvf yes, I think that would be good. Or would it be possible to add subresource support to the apply method?

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-bot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.