openshift-logging redeploy-certificates playbook throws an error during the check_es_health stage
samcrutt9900 opened this issue · 4 comments
Description
Running the openshift-logging redeploy-certificates playbook throws an error during the check_es_health stage.
Version
ansible 2.7.8
openshift-ansible-3.11.318-2.git.1.da17c54.el7.noarch
Steps To Reproduce
- Run the playbook openshift-logging/redeploy-certificates.yml
Expected Results
The playbook completes successfully.
Observed Results
check_es_health.yaml throws the following error
(1, '\r\n{"changed": true, "end": "2021-03-30 09:41:36.446530", "stdout": "", "cmd": ["oc", "get", "pod", "-l", "component=es,provider=openshift", "-n", "openshift-logging", "-o", "jsonpath={.items[?(@.status.phase==\\"Running\\")].metadata.name}"], "failed": true, "delta": "0:00:00.372033", "stderr": "error: the server doesn\'t have a resource type \\"pod\\"", "rc": 1, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": false, "_raw_params": "oc get pod -l component=es,provider=openshift -n openshift-logging -o jsonpath={.items[?(@.status.phase==\\\\\\"Running\\\\\\")].metadata.name}", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2021-03-30 09:41:36.074497", "msg": "non-zero return code"}\r\n'
Additional Information
Red Hat Enterprise Linux Server release 7.9 (Maipo)
I think this is because the check_es_health.yaml task does not pass a config file (kubeconfig) to the oc command, so oc is run without valid cluster credentials:
https://github.com/openshift/openshift-ansible/blob/release-3.11/playbooks/openshift-logging/private/check_es_health.yaml#L3
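For illustration, a minimal sketch of the kind of change that might address this, assuming the admin kubeconfig path commonly used elsewhere in openshift-ansible ({{ openshift.common.config_base }}/master/admin.kubeconfig) and that the play targets the first master. The label selector, namespace, and jsonpath are taken from the error output above; the task name, host group, variable, and the use of --config (oc also accepts --kubeconfig) are assumptions, not the upstream file:

```yaml
# Sketch only, not the contents of the upstream check_es_health.yaml.
# Assumption: pointing oc at the cluster admin kubeconfig avoids the
# "the server doesn't have a resource type \"pod\"" error seen when oc
# runs without credentials on the first master.
- name: Check Elasticsearch cluster health
  hosts: oo_first_master
  tasks:
  - name: Get running Elasticsearch pods
    command: >
      oc get pod -l component=es,provider=openshift
      -n openshift-logging
      --config={{ openshift.common.config_base }}/master/admin.kubeconfig
      -o jsonpath={.items[?(@.status.phase=="Running")].metadata.name}
    register: es_pods
    changed_when: false
```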
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.