urllib3.exceptions.ResponseNotChunked when using resource.watch
larsks opened this issue · 8 comments
I wanted to watch for changes to a ConfigMap. I tried the following code:
>>> import kubernetes
>>> import openshift.dynamic
>>> oc = openshift.dynamic.DynamicClient(kubernetes.config.new_client_from_config())
>>> cm = oc.resources.get(api_version='v1', kind='ConfigMap')
>>> check = cm.get(name='updateconfig', namespace='default')
>>> check.metadata.name
'updateconfig'
>>> for change in cm.watch(name='updateconfig', namespace='default'):
...     print(change)
...
But that fails with the following traceback:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lars/.cache/pypoetry/virtualenvs/installplan-approver-VR2p4iYm-py3.9/lib/python3.9/site-packages/openshift/dynamic/client.py", line 175, in watch
    for event in watcher.stream(
  File "/home/lars/.cache/pypoetry/virtualenvs/installplan-approver-VR2p4iYm-py3.9/lib/python3.9/site-packages/kubernetes/watch/watch.py", line 159, in stream
    for line in iter_resp_lines(resp):
  File "/home/lars/.cache/pypoetry/virtualenvs/installplan-approver-VR2p4iYm-py3.9/lib/python3.9/site-packages/kubernetes/watch/watch.py", line 56, in iter_resp_lines
    for seg in resp.read_chunked(decode_content=False):
  File "/home/lars/.cache/pypoetry/virtualenvs/installplan-approver-VR2p4iYm-py3.9/lib/python3.9/site-packages/urllib3/response.py", line 742, in read_chunked
    raise ResponseNotChunked(
urllib3.exceptions.ResponseNotChunked: Response is not chunked. Header 'transfer-encoding: chunked' is missing.
>>>
This is with:
- OpenShift 4.7.0
- Python 3.9.2
- kubernetes module 12.0.1
- openshift module 0.12.0
It looks as if this was originally reported as #273, but that issue was closed without resolution.
Maybe the solution is to use field_selector='metadata.name=updateconfig' instead? I'm not sure whether that would be equivalent.
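For reference, a minimal sketch of what that field_selector variant might look like (untested here; it assumes the same cm resource object as above, that the dynamic client's watch() accepts field_selector, and that it yields the usual event dicts with 'type' and 'object' keys):

>>> for change in cm.watch(namespace='default',
...                        field_selector='metadata.name=updateconfig'):
...     # 'type' is expected to be ADDED/MODIFIED/DELETED;
...     # 'object' is expected to be the matching ConfigMap
...     print(change['type'], change['object'].metadata.name)

The question above still stands: whether watching a field-selected list behaves the same as watching a single named object.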
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
I'll try to find time to try this again and confirm it's still happening.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.