Response is not chunky enough
mmazur opened this issue · 8 comments
I've tried to use a simple `resource.watch()` as per the docs, and this is what I've gotten for my troubles
(k8s 1.13.3; openshift client both 0.8.6 and current master; urllib3 1.24.1).
The full traceback is:
```
Traceback (most recent call last):
  File "/tmp/ansible_kubevirt_pvc_payload_1mvq554w/__main__.py", line 437, in main
    module.execute_module()
  File "/tmp/ansible_kubevirt_pvc_payload_1mvq554w/__main__.py", line 430, in execute_module
    result = self._wait_for_creation(resource)
  File "/tmp/ansible_kubevirt_pvc_payload_1mvq554w/__main__.py", line 370, in _wait_for_creation
    for event in resource.watch(name=self.name, namespace=self.namespace, timeout=self.params.get('wait_timeout')):
  File "/home/mmazur/.local/lib/python3.7/site-packages/openshift/dynamic/client.py", line 312, in watch
    timeout_seconds=timeout
  File "/home/mmazur/.local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 130, in stream
    for line in iter_resp_lines(resp):
  File "/home/mmazur/.local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
    for seg in resp.read_chunked(decode_content=False):
  File "/home/mmazur/.local/lib/python3.7/site-packages/urllib3/response.py", line 647, in read_chunked
    "Response is not chunked. "
urllib3.exceptions.ResponseNotChunked: Response is not chunked. Header 'transfer-encoding: chunked' is missing.
```
```
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "access_modes": [
                "ReadWriteOnce"
            ],
            "annotations": null,
            "api_key": null,
            "cdi_source": null,
            "cert_file": null,
            "context": null,
            "force": false,
            "host": null,
            "key_file": null,
            "kubeconfig": null,
            "labels": null,
            "merge_type": null,
            "name": "pvc1",
            "namespace": "default",
            "password": null,
            "resource_definition": null,
            "selector": null,
            "size": "20Mi",
            "ssl_ca_cert": null,
            "state": "present",
            "storage_class_name": null,
            "username": null,
            "verify_ssl": null,
            "volume_mode": null,
            "volume_name": null,
            "wait": true,
            "wait_timeout": 300
        }
    },
    "msg": "Response is not chunked. Header 'transfer-encoding: chunked' is missing."
}
```
And the culprit is `name=self.name`. Without that argument I don't get the error.
Where is that `name=self.name` coming from? It looks like the `*/watch/*` endpoint has been deprecated in favor of a `watch` query parameter on a list operation, with a `fieldSelector` on the name to filter the response, and it's possible the kubernetes python client is assuming this pattern. I'll dig into this a bit; it may be that we should remove `name` as a parameter (or automatically translate it to a `fieldSelector`).
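If that translation were done by hand, a minimal sketch might look like the following. This assumes `Resource.watch()` accepts a `field_selector` keyword (as the `field_selector` mentioned later in this thread implies); the `watch()` call itself needs a live cluster, so only the selector-building helper is directly runnable here.

```python
def name_to_field_selector(name):
    """Translate a resource name into the equivalent fieldSelector string
    used by a watch on the list endpoint (instead of the deprecated
    */watch/* endpoint)."""
    return "metadata.name={0}".format(name)

# Illustrative usage (hypothetical, requires a cluster and a dynamic
# client `resource` object):
#
#     for event in resource.watch(
#             namespace="default",
#             field_selector=name_to_field_selector("pvc1"),
#             timeout=300):
#         ...

print(name_to_field_selector("pvc1"))  # metadata.name=pvc1
```

The point of the helper is just to show the shape of the translation the client could do automatically if `name` were kept as a parameter.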
This is inside a `KubernetesRawModule`-derived object, so `self.name` and `self.namespace` are just taken from `self.params`.
Whether or not you drop `name`, I think adding `uid` might be useful: if not as an arg to `watch()`, then at least as an example with `field_selector` in the docs.
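In the meantime, one hedged workaround sketch is to watch without `name=` and filter the event stream client-side by name or uid. The event shape below (a dict with an `object` key carrying `metadata`) is an assumption modeled on the dict form the kubernetes client's watch stream can return; the helper itself is plain Python and runnable as-is.

```python
def filter_events(events, name=None, uid=None):
    """Yield only watch events whose object metadata matches the given
    name and/or uid. Assumes each event looks like
    {"type": ..., "object": {"metadata": {...}}}."""
    for event in events:
        meta = event.get("object", {}).get("metadata", {})
        if name is not None and meta.get("name") != name:
            continue
        if uid is not None and meta.get("uid") != uid:
            continue
        yield event

# Demonstration with fake events (no cluster needed):
events = [
    {"type": "ADDED", "object": {"metadata": {"name": "pvc1", "uid": "abc"}}},
    {"type": "ADDED", "object": {"metadata": {"name": "pvc2", "uid": "def"}}},
]
matched = list(filter_events(events, name="pvc1"))
print(len(matched))  # 1
```

Note that server-side field selector support varies by resource and field, which is another reason client-side filtering by uid may be the more portable option.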
@fabianvf I've gotten a report on this ⬆️ and there's no tweak I can do in my code about it: https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/kubevirt/kubevirt_vm.py#L282
What would an approach to solving this permanently look like? Is it possible this is due to the bug reporter using kubernetes-8.0.1 rather than 9.0+?
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now please do so with `/close`.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting `/reopen`.
Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
Exclude this issue from closing again by commenting `/lifecycle frozen`.
/close
@openshift-bot: Closing this issue.
In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue by commenting `/reopen`.
> Mark the issue as fresh by commenting `/remove-lifecycle rotten`.
> Exclude this issue from closing again by commenting `/lifecycle frozen`.
>
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.