invalid label key error for role and other label primitives
thoraxe opened this issue · 4 comments
There are certain Kubernetes label primitives; one example is node-role.kubernetes.io/infra.
When trying to use this label with k8s_facts, you get an error:
Traceback (most recent call last):
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1554148610.5706491-9273861806169/AnsiballZ_k8s_facts.py", line 113, in <module>
_ansiballz_main()
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1554148610.5706491-9273861806169/AnsiballZ_k8s_facts.py", line 105, in _ansiballz_main
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
File "/home/ec2-user/.ansible/tmp/ansible-tmp-1554148610.5706491-9273861806169/AnsiballZ_k8s_facts.py", line 48, in invoke_module
imp.load_module('__main__', mod, module, MOD_DESC)
File "/tmp/ansible_k8s_facts_payload_EQjWoB/__main__.py", line 176, in <module>
File "/tmp/ansible_k8s_facts_payload_EQjWoB/__main__.py", line 172, in main
File "/tmp/ansible_k8s_facts_payload_EQjWoB/__main__.py", line 153, in execute_module
File "/tmp/ansible_k8s_facts_payload_EQjWoB/ansible_k8s_facts_payload.zip/ansible/module_utils/k8s/common.py", line 207, in kubernetes_facts
File "/usr/lib/python2.7/site-packages/openshift/dynamic/client.py", line 73, in inner
raise api_exception(e)
openshift.dynamic.exceptions.BadRequestError: 400
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Date': 'Mon, 01 Apr 2019 19:56:51 GMT', 'Audit-Id': '094580fe-4610-49d8-842d-287d0663bbe7', 'Content-Length': '469', 'Content-Type': 'application/json', 'Cache-Control': 'no-store'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"unable to parse requirement: invalid label key \"node-role.kubernetes.io\\\\/infra\": prefix part a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","reason":"BadRequest","code":400}
Original traceback:
File "/usr/lib/python2.7/site-packages/openshift/dynamic/client.py", line 71, in inner
resp = func(self, resource, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/openshift/dynamic/client.py", line 233, in get
return self.request('get', path, **kwargs)
File "/usr/lib/python2.7/site-packages/openshift/dynamic/client.py", line 374, in request
_return_http_data_only=params.get('_return_http_data_only', True)
File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
_request_timeout=_request_timeout)
File "/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 342, in request
headers=headers)
File "/usr/lib/python2.7/site-packages/kubernetes/client/rest.py", line 231, in GET
query_params=query_params)
File "/usr/lib/python2.7/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
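The message in the HTTP response body points at the actual failure: the key reached the API server as node-role.kubernetes.io\/infra, so the prefix part (everything before the last slash) ends in a backslash and fails the DNS-1123 subdomain check. A minimal sketch of that check, using the exact regex quoted in the error body (the helper name prefix_is_valid is mine for illustration, not Kubernetes or openshift client code):

```python
import re

# DNS-1123 subdomain regex quoted verbatim in the API error message above.
SUBDOMAIN = re.compile(r'^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$')

def prefix_is_valid(label_key):
    """Check the optional prefix of a label key (the part before the last '/')."""
    prefix, _, _name = label_key.rpartition('/')
    # A key with no prefix is allowed; otherwise the prefix must be a subdomain.
    return not prefix or bool(SUBDOMAIN.match(prefix))

print(prefix_is_valid('node-role.kubernetes.io/infra'))    # the raw key is valid
print(prefix_is_valid('node-role.kubernetes.io\\/infra'))  # the escaped slash corrupts the prefix
```

So the slash should not be escaped at all: the raw key is already legal, and any backslash becomes part of the prefix and breaks validation.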
I tried various escaping sequences and quoting styles but got essentially the same error every time.
This is being invoked via Ansible.
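For context, a task that triggers this looks roughly like the following (an illustrative reconstruction, not the reporter's actual playbook; the kind and register name are assumptions):

```yaml
# Illustrative only: a k8s_facts task filtering by a prefixed label key.
- name: Find infra nodes
  k8s_facts:
    kind: Node
    label_selectors:
      - node-role.kubernetes.io/infra
  register: infra_nodes
```

Passing the key unescaped, as above, is the shape that should be valid; adding backslashes or extra quoting around the slash is what produces the 400.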
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.