haxsaw/hikaru

Error when trying to patch/replace an object

arikalon1 opened this issue · 7 comments

Thanks for the new version. It's great!

When trying to patch/replace an existing Kubernetes object (ConfigMap, Deployment, etc.), the API returns an error about the creationTimestamp field (see the error below).
Removing this field from the generated 'clean_dict' before saving solved it, but I'm not sure this is the correct solution.

Code sample:
dep: Deployment = Deployment.readNamespacedDeployment("my-deployment", "default").obj
dep.spec.replicas += 1
dep.patchNamespacedDeployment("my-deployment", "default")
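The workaround described above can be sketched as follows. This is an illustration of the idea (dropping metadata.creationTimestamp from the cleaned dict before it is sent back), not hikaru's actual fix; the helper name is hypothetical.

```python
def strip_creation_timestamp(obj_dict: dict) -> dict:
    """Remove metadata.creationTimestamp from a cleaned object dict so the
    API server does not try to re-parse the mis-serialized timestamp."""
    meta = obj_dict.get("metadata", {})
    meta.pop("creationTimestamp", None)  # pop is a no-op if the key is absent
    return obj_dict
```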

error:
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ObjectMeta: v1.ObjectMeta.UID: SelfLink: ResourceVersion: Namespace: Name: CreationTimestamp: unmarshalerDecoder: parsing time "2021-05-15T12:53:45" as "2006-01-02T15:04:05Z07:00": cannot parse "" as "Z07:00", error found in #10 byte of ...|T12:53:45", "name": |..., bigger context ...|data": {"creationTimestamp": "2021-05-15T12:53:45", "name": "jobs-states", "namespace": "robusta", "|...","reason":"BadRequest","code":400}
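The parse failure in the error above is reproducible in plain Python: the API server expects RFC 3339 timestamps (the Go layout "2006-01-02T15:04:05Z07:00" requires a timezone suffix), but a naive Python datetime serialized with isoformat() omits the offset, producing exactly the string the server rejects.

```python
from datetime import datetime, timezone

# A naive datetime serializes without a timezone offset -> rejected by the API server.
naive = datetime(2021, 5, 15, 12, 53, 45)
print(naive.isoformat())  # 2021-05-15T12:53:45

# A timezone-aware datetime serializes with an offset -> valid RFC 3339.
aware = naive.replace(tzinfo=timezone.utc)
print(aware.isoformat())  # 2021-05-15T12:53:45+00:00
```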

stack trace:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/hikaru/model/rel_1_16/v1/v1.py", line 12134, in replaceNamespacedConfigMap
result = the_method(**all_args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 25568, in replace_namespaced_config_map_with_http_info
return self.api_client.call_api(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 399, in request
return self.rest_client.PUT(url,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/rest.py", line 284, in PUT
return self.request("PUT", url,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request

aantn commented

I'm opening a PR fixing this. The fix is a temporary workaround - a better fix is probably appropriate.

aantn commented

Some relevant context: kubernetes-client/python#730

This is interesting; since clean_dict() only strips out keys with None values, it would have left alone any datetimes that were supplied to it from the initial read. That means the underlying client is providing values that it itself cannot consume. I can replicate this problem in my integration tests; I'll have a look at the PR.
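To make the point concrete, here is a minimal sketch of the behavior described for clean_dict(): it only drops keys whose values are None, so timestamp strings pass through untouched. This is an illustration of the described behavior, not hikaru's actual implementation.

```python
def clean_dict_sketch(d: dict) -> dict:
    """Recursively drop keys whose value is None; leave everything else as-is."""
    out = {}
    for k, v in d.items():
        if v is None:
            continue
        out[k] = clean_dict_sketch(v) if isinstance(v, dict) else v
    return out
```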

aantn commented

The problem is the round trip from API server -> Python datetime -> API server. It's also worth pointing out that Hikaru doesn't have a concept of datetime fields at all, just str fields. Not sure whether that is a factor here.

I think we're saying the same thing here. Hikaru does nothing special with any field's type unless it is a nested object. So yes, it's due to the round trip, but again I find it interesting that a field Hikaru doesn't touch is rejected when it goes back into the client that created it.

In terms of what's in the swagger, creationTimestamp's type is listed as 'string', but its format is simply 'date-time'. I haven't been looking at format so far, as there has been little benefit to doing so in Python (int32 vs int64, for example). Even so, this appears to have something to do with timezones, and there's nothing in the format that lets us algorithmically determine that they matter. There are a few other fields where I imagine we'll run into similar problems (Quantity comes to mind), and we'll probably need solutions similar to the PR.
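One possible shape for a more general fix along the lines discussed above: when a field's swagger format is 'date-time', re-serialize any datetime value as an RFC 3339 string with an explicit UTC designator before sending it back. The function name and the assume-UTC policy are illustrative assumptions, not part of hikaru's API.

```python
from datetime import datetime, timezone

def to_rfc3339(value):
    """If value is a datetime, return an RFC 3339 string the API server
    will accept; otherwise return the value unchanged."""
    if isinstance(value, datetime):
        if value.tzinfo is None:
            # Assumption: treat naive datetimes as UTC (the common k8s convention).
            value = value.replace(tzinfo=timezone.utc)
        # Prefer the compact 'Z' designator over '+00:00'.
        return value.isoformat().replace("+00:00", "Z")
    return value
```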

aantn commented

Yeah, understood.

Added to 0.5b