kubernetes/kubernetes

strategic patch: "unrecognized type" error not informative enough

Closed this issue · 22 comments

cben commented

What happened:
Given a wrong type (e.g. a string where an int was expected) in a large YAML, oc apply, as well as the underlying kubectl patch --type=strategic, gives an error that's hard to act on when the patch is large.

$ cluster/kubectl.sh patch --type=strategic -p '{"spec": {"replicas": 1}}'  deployment/sise
deployment.extensions/sise patched (no change)
$ cluster/kubectl.sh patch --type=strategic -p '{"spec": {"replicas": "1"}}'  deployment/sise
Error from server: unrecognized type: int32
  1. The wording suggests the problem was an unexpected integer, but it actually means an integer was expected and something else was given. Worse, there are two code paths (FromUnstructured, ToUnstructured) giving exactly the same error for opposite conversion directions! (A sketch after the apply output below illustrates the converter pattern.)
  2. No indication of where in the patch the problem was. In the example above the patch is tiny, but with kubectl apply it's easy to get lost in a huge patch. apply prints more context, but the error on the last line is still the same error from patch:
$ oc apply -f deployment.yaml
deployment.apps/sise configured
$ oc apply -f deployment-string.yaml
Error from server: error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":null,\"generation\":1,\"labels\":{\"run\":\"sise\"},\"name\":\"sise\",\"namespace\":\"default\"},\"spec\":{\"replicas\":\"1\",\"selector\":{\"matchLabels\":{\"run\":\"sise\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":1,\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"run\":\"sise\"}},\"spec\":{\"containers\":[{\"image\":\"mhausenblas/simpleservice:0.5.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sise\",\"ports\":[{\"containerPort\":9876,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}}}\n"},"creationTimestamp":null},"spec":{"replicas":"1"}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "sise", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"sise" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/sise" "creationTimestamp":"2019-02-04T10:57:10Z" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":null,\"generation\":1,\"labels\":{\"run\":\"sise\"},\"name\":\"sise\",\"namespace\":\"default\"},\"spec\":{\"replicas\":1,\"selector\":{\"matchLabels\":{\"run\":\"sise\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":1,\"maxUnavailable\":1},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"run\":\"sise\"}},\"spec\":{\"containers\":[{\"image\":\"mhausenblas/simpleservice:0.5.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"sise\",\"ports\":[{\"containerPort\":9876,\"protocol\":\"TCP\"}],\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}}}\n"] "uid":"9ee94b73-286b-11e9-9a60-68f728fac3ab" "resourceVersion":"424" "generation":'\x01' "labels":map["run":"sise"]] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["run":"sise"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["run":"sise"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["name":"sise" "image":"mhausenblas/simpleservice:0.5.0" "ports":[map["containerPort":'\u2694' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-02-04T10:57:10Z" "reason":"MinimumReplicasAvailable" "message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-02-04T10:57:10Z"] map["type":"Progressing" "status":"True" "lastUpdateTime":"2019-02-04T10:57:12Z" "lastTransitionTime":"2019-02-04T10:57:10Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"sise-5fc86787d8\" has successfully progressed."]] "observedGeneration":'\x01'] "kind":"Deployment"]}
for: "deployment-string.yaml": unrecognized type: int32

What you expected to happen:

  • Tell me I gave the string "1" where an int was expected.
  • Ideally, tell me the problem was in spec.replicas.

How to reproduce it (as minimally and precisely as possible):
https://gist.github.com/cben/9bbb982fb8fcf3d88c2c875d04e3a42c (the one-line difference between the two manifests is sketched after the steps below)

  1. kubectl apply -f deployment.yaml
  2. kubectl patch --type=strategic -p '{"spec": {"replicas": "1"}}' deployment/sise
  3. kubectl apply -f deployment-string.yaml
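Assuming the gist mirrors the deployment visible in the last-applied-configuration dump above, the two manifests differ only in the quoting of replicas; the relevant part:

# deployment.yaml (accepted)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sise
  labels:
    run: sise
spec:
  replicas: 1          # int: accepted
  selector:
    matchLabels:
      run: sise
  template:
    metadata:
      labels:
        run: sise
    spec:
      containers:
      - name: sise
        image: mhausenblas/simpleservice:0.5.0
        ports:
        - containerPort: 9876

# deployment-string.yaml is identical except for one line:
#   replicas: "1"      # string: rejected with "unrecognized type: int32"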

Anything else we need to know?:
Originally I experienced this on OpenShift 3.11, where the UX is even worse: the first create/apply is in some cases tolerant and accepts a string instead of an int (at least for containerPort), while a subsequent apply/patch rejects it! But on upstream k8s master I see errors from the first create/apply too, so that part isn't relevant here.

Other patch formats use a different code path, giving a much more informative error (see the decoder sketch after the two examples below):

  • JSON Merge Patch

    $ cluster/kubectl.sh patch --type=merge -p '{"spec": {"replicas": 1}}'  deployment/sise
    deployment.extensions/sise patched
    $ cluster/kubectl.sh patch --type=merge -p '{"spec": {"replicas": "1"}}'  deployment/sise
    Error from server: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas":"1","revisi|..., bigger context ...|"spec":{"progressDeadlineSeconds":600,"replicas":"1","revisionHistoryLimit":10,"selector":{"matchLab|...
  • JSON Patch

    $ cluster/kubectl.sh patch --type=json -p '[{"op": "replace", "path": "/spec/replicas", "value": 1}]'  deployment/sise
    deployment.extensions/sise patched (no change)
    $ cluster/kubectl.sh patch --type=json -p '[{"op": "replace", "path": "/spec/replicas", "value": "1"}]'  deployment/sise
    Error from server: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas":"1","revisi|..., bigger context ...|"spec":{"progressDeadlineSeconds":600,"replicas":"1","revisionHistoryLimit":10,"selector":{"matchLab|...
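A minimal sketch of why these code paths do better (illustrative: the API server uses json-iterator, plain encoding/json below shows the same effect): a typed decoder knows both the field it is filling and the token it choked on.

// Sketch of the typed-decode path: unmarshalling straight into the typed
// struct yields an error that names the field and both types, which is what
// the merge/JSON patch code paths benefit from.
package main

import (
	"encoding/json"
	"fmt"
)

type deploymentSpec struct {
	Replicas int32 `json:"replicas"`
}

type deployment struct {
	Spec deploymentSpec `json:"spec"`
}

func main() {
	var d deployment
	err := json.Unmarshal([]byte(`{"spec":{"replicas":"1"}}`), &d)
	fmt.Println(err)
	// Prints an error naming the field and both types, along the lines of:
	// json: cannot unmarshal string into Go struct field ... spec.replicas of type int32
}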

kubectl edit also sometimes shows this error, see #26050 (comment)

Environment:

  • Kubernetes version (use kubectl version): built from master today:
    Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.230+cdfb9126d334ee-dirty", GitCommit:"cdfb9126d334eea722e34f3a895904bb152d53f0", GitTreeState:"dirty", BuildDate:"2019-02-04T10:49:37Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.230+cdfb9126d334ee-dirty", GitCommit:"cdfb9126d334eea722e34f3a895904bb152d53f0", GitTreeState:"dirty", BuildDate:"2019-02-04T10:49:37Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
    
  • Cloud provider or hardware configuration: ThinkPad T450s laptop
  • OS (e.g. from /etc/os-release): Fedora 29
  • Kernel (e.g. uname -a): Linux 4.19.15-300.fc29.x86_64 #1 SMP Mon Jan 14 16:32:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
cben commented

Working on a patch => #73695

I also hit this with kube-graffiti while trying to patch an extensions/v1beta1 Deployment:

My json-patch:

[{ "op": "replace", "path": "/spec/replicas", "value": "3" }]

Error I get:

2019-03-13T10:42:48Z |ERRO| failed to patch object component=existing error="v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Replicas: readUint32: unexpected character: �, error found in #10 byte of ...|eplicas\":\"3\",\"revisi|..., bigger context ...|\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":\"3\",\"revisionHistoryLimit\":0,\"selector\":{\"matchLabe|..." group-version=extensions/v1beta1 kind=Deployment name=kube-apiserver namespace=shoot--test--backuptest rule=kube-api-changes-backuptest
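(In fairness, the JSON Patch path here does at least name DeploymentSpec.Replicas, unlike the strategic path. The client-side fix is simply an unquoted number:)

[{ "op": "replace", "path": "/spec/replicas", "value": 3 }]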

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

cben commented

/remove-lifecycle stale

I need to address review feedback on my PR.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Not a single additional detail about what's producing the issue at apply time.
So I still think this could be improved and would like to put it back on the backlog.

Error from server: error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"logstash\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"logstash\"}},\"spec\":{\"containers\":[{\"command\":[\"logstash\"],\"env\":[{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_USERNAME\",\"value\":\"logstash_internal\"},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_URL\",\"value\":\"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']\"},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_USERNAME\",\"value\":\"logstash_internal\"},{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"XPACK_MANAGEMENT_PIPELINE_ID\",\"value\":\"main\"},{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_URL\",\"value\":\"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']\"},{\"name\":\"XPACK_MANAGEMENT_ENABLED\",\"value\":true},{\"name\":\"XPACK_MONITORING_ENABLED\",\"value\":true},{\"name\":\"XPACK_MANAGEMENT_ELASTICSEARCH_PASSWORD\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"logstash_internal_password\",\"name\":\"logstash\"}}},{\"name\":\"XPACK_MONITORING_ELASTICSEARCH_PASSWORD\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"logstash_internal_password\",\"name\":\"logstash\"}}}],\"image\":\"docker.elastic.co/logstash/logstash:6.7.2\",\"name\":\"logstash\",\"ports\":[{\"containerPort\":5044,\"name\":\"logstash\"}],\"volumeMounts\":[{\"mountPath\":\"/usr/share/logstash/config/\",\"name\":\"logstash-config\"},{\"mountPath\":\"/usr/share/logstash/certificate/\",\"name\":\"certificate\"},{\"mountPath\":\"/usr/share/logstash/patterns/\",\"name\":\"patterns\"},{\"mountPath\":\"/usr/share/logstash/pipeline/\",\"name\":\"main-pipeline-config\"}]}],\"volumes\":[{\"configMap\":{\"name\":\"logstash-config\"},\"name\":\"logstash-config\"},{\"configMap\":{\"name\":\"generalca\"},\"name\":\"certificate\"},{\"configMap\":{\"name\":\"patterns\"},\"name\":\"patterns\"}]}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"logstash"}],"$setElementOrder/volumes":[{"name":"logstash-config"},{"name":"certificate"},{"name":"patterns"}],"containers":[{"env":[{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_USERNAME","value":"logstash_internal"},{"name":"XPACK_MONITORING_ELASTICSEARCH_URL","value":"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']"},{"name":"XPACK_MONITORING_ELASTICSEARCH_USERNAME","value":"logstash_internal"},{"name":"NODE_NAME","valueFrom":{"fieldRef":{"fieldPath":"status.hostIP"}}},{"name":"XPACK_MANAGEMENT_PIPELINE_ID","value":"main"},{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_URL","value":"['https://192.168.1.13:9200','https://192.168.1.14:9200','https://192.168.1.15:9200']"},{"name":"XPACK_MANAGEMENT_ENABLED","value":true},{"name":"XPACK_MONITORING_ENABLED","value":true},{"name":"XPACK_MANAGEMENT_ELASTICSEARCH_PASSWORD","valueFrom":{"secretKeyRef":{"key":"logstash_internal_password","name":"logstash"}}},{"name":"XPACK_MONITORING_ELASTICSEARCH_PASSWORD","valueFrom":{"secretKeyRef":{"key":"logstash_internal_password","name":"logstash"}}}],"name":"logstash"}],"volumes":[{"$patch":"delete","name":"main-pipeline-config"}]}}}} to: Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment" Name: "logstash", Namespace: "default" Object: 
&{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"name\":\"logstash\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"k8s-app\":\"logstash\"}},\"spec\":{\"containers\":[{\"command\":[\"logstash\"],\"image\":\"docker.elastic.co/logstash/logstash:6.7.2\",\"name\":\"logstash\",\"ports\":[{\"containerPort\":5044,\"name\":\"logstash\"}],\"volumeMounts\":[{\"mountPath\":\"/usr/share/logstash/config/\",\"name\":\"logstash-config\"},{\"mountPath\":\"/usr/share/logstash/certificate/\",\"name\":\"certificate\"},{\"mountPath\":\"/usr/share/logstash/patterns/\",\"name\":\"patterns\"},{\"mountPath\":\"/usr/share/logstash/pipeline/\",\"name\":\"main-pipeline-config\"}]}],\"volumes\":[{\"configMap\":{\"name\":\"logstash-config\"},\"name\":\"logstash-config\"},{\"configMap\":{\"name\":\"generalca\"},\"name\":\"certificate\"},{\"configMap\":{\"name\":\"patterns\"},\"name\":\"patterns\"},{\"configMap\":{\"name\":\"main-pipeline\"},\"name\":\"main-pipeline-config\"}]}}}}\n"] "creationTimestamp":"2019-10-29T14:58:12Z" "generation":'\x01' "labels":map["k8s-app":"logstash"] "name":"logstash" "namespace":"default" "resourceVersion":"29682023" "selfLink":"/apis/extensions/v1beta1/namespaces/default/deployments/logstash" "uid":"8781131e-fa5c-11e9-a67d-02e51885437d"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x01' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["k8s-app":"logstash"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["k8s-app":"logstash"]] "spec":map["containers":[map["command":["logstash"] "image":"docker.elastic.co/logstash/logstash:6.7.2" "imagePullPolicy":"IfNotPresent" "name":"logstash" "ports":[map["containerPort":'\u13b4' "name":"logstash" "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "volumeMounts":[map["mountPath":"/usr/share/logstash/config/" "name":"logstash-config"] map["mountPath":"/usr/share/logstash/certificate/" "name":"certificate"] map["mountPath":"/usr/share/logstash/patterns/" "name":"patterns"] map["mountPath":"/usr/share/logstash/pipeline/" "name":"main-pipeline-config"]]]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e' "volumes":[map["configMap":map["defaultMode":'\u01a4' "name":"logstash-config"] "name":"logstash-config"] map["configMap":map["defaultMode":'\u01a4' "name":"generalca"] "name":"certificate"] map["configMap":map["defaultMode":'\u01a4' "name":"patterns"] "name":"patterns"] map["configMap":map["defaultMode":'\u01a4' "name":"main-pipeline"] "name":"main-pipeline-config"]]]]] "status":map["availableReplicas":'\x01' "conditions":[map["lastTransitionTime":"2019-10-29T14:58:12Z" "lastUpdateTime":"2019-10-29T14:58:12Z" "message":"Deployment has minimum availability." "reason":"MinimumReplicasAvailable" "status":"True" "type":"Available"]] "observedGeneration":'\x01' "readyReplicas":'\x01' "replicas":'\x01' "updatedReplicas":'\x01']]} for: "logstash-deployment.yaml": unrecognized type: string

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@cben: Any news on this?

cben commented

Thanks for the reminder. I need to rebase and address feedback but keep not getting to it. At the moment I'm sick. If anyone wants to take over, go ahead.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

cben commented

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Problem persists
/reopen

@Skitionek: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

Problem persists
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

cben commented

/reopen

@cben looks like the CI robot ignored you

The issue is still not fixed. After so many years.