Lack of content types in patchNamespacedDeploymentCall
kozjan opened this issue · 6 comments
Describe the bug
I am migrating from client version v19.0.0 to v20.0.0.
When using AppsV1Api::patchNamespacedDeploymentCall, I get an error:
io.kubernetes.client.openapi.ApiException: Message:
HTTP response code: 415
HTTP response body:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "415: Unsupported Media Type",
"reason": "UnsupportedMediaType",
"details": {},
"code": 415
}
I see that v19.0.0 defines more content types:
final String[] localVarContentTypes = {
"application/json-patch+json", "application/merge-patch+json", "application/strategic-merge-patch+json", "application/apply-patch+yaml"
};
compared to v20.0.0:
final String[] localVarContentTypes = {
"application/json"
};
Is there any reason for this?
Client Version
v20.0.0
Kubernetes Version
1.24
Java Version
Java 8
To Reproduce
Steps to reproduce the behavior:
- Execute an APIpatchNamespacedDeploymentRequest (see the sketch below)
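A minimal sketch of the failing call, assuming the v20.0.0 fluent request API; the deployment name, namespace, and patch body are placeholders, not values from the original report:

import io.kubernetes.client.custom.V1Patch;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.AppsV1Api;
import io.kubernetes.client.util.Config;

public class PatchRepro {
  public static void main(String[] args) throws Exception {
    ApiClient client = Config.defaultClient();
    AppsV1Api api = new AppsV1Api(client);

    // Strategic-merge patch body that bumps the replica count (placeholder).
    String patchJson = "{\"spec\":{\"replicas\":2}}";

    try {
      // v20.0.0 only advertises "application/json" for this call, so the
      // patch is sent with Content-Type: application/json and the API server
      // rejects it with 415 Unsupported Media Type.
      api.patchNamespacedDeployment("my-deployment", "default", new V1Patch(patchJson))
          .execute();
    } catch (ApiException e) {
      System.err.println(e.getResponseBody()); // the Status object shown above
    }
  }
}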
Expected behavior
K8s accepts the request without problems.
You should use PatchUtils.
This works as a workaround, but it really is a bug in the client.
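For anyone hitting this, here is a rough sketch of the PatchUtils workaround. The deployment name, namespace, and patch body are placeholders, and it assumes the v20 fluent request exposes buildCall; my understanding is that PatchUtils rewrites the Content-Type of the built call to the given patch format, which sidesteps the missing content types in the generated code.

import io.kubernetes.client.custom.V1Patch;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.apis.AppsV1Api;
import io.kubernetes.client.openapi.models.V1Deployment;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.PatchUtils;

public class PatchWithPatchUtils {
  public static void main(String[] args) throws Exception {
    ApiClient client = Config.defaultClient();
    AppsV1Api api = new AppsV1Api(client);

    // Placeholder strategic-merge patch body.
    V1Patch patch = new V1Patch("{\"spec\":{\"replicas\":2}}");

    // PatchUtils executes the call with the Content-Type header set to the
    // given patch format instead of the generated default.
    V1Deployment patched =
        PatchUtils.patch(
            V1Deployment.class,
            // Build, but do not execute, the underlying okhttp3.Call.
            () -> api.patchNamespacedDeployment("my-deployment", "default", patch).buildCall(null),
            V1Patch.PATCH_FORMAT_STRATEGIC_MERGE_PATCH,
            client);

    System.out.println("replicas: " + patched.getSpec().getReplicas());
  }
}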
Looks related to #3106
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.