Prerequisites.yml container install fails at TASK [get openshift_current_version]
Description
I'm running the containerized prerequisites.yml playbook from a CentOS 7 Ansible controller against 3 x CentOS Atomic hosts.
Version
ansible 2.8.3
config file = None
configured module search path = [u'/home/vincent/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
Steps To Reproduce
- Configure the Ansible inventory file
- Run the prerequisites install via the atomic system container:
sudo atomic install --system --storage=ostree --set INVENTORY_FILE=/etc/ansible/hosts --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml --set OPTS="-vvv" docker.io/openshift/origin-ansible:v3.11
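For reference, the equivalent non-containerized invocation (my assumption, based on the paths used above; I have not verified it outside the container) would be:
ansible-playbook -vvv -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml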
Expected Results
The prerequisites playbook completes successfully so the OKD cluster deployment can proceed.
Observed Results
Fails at TASK [get openshift_current_version]:
TASK [get openshift_current_version] *******************************************
task path: /usr/share/ansible/openshift-ansible/playbooks/init/cluster_facts.yml:10
Wednesday 07 August 2019 19:20:44 +0000 (0:00:00.113) 0:00:09.070 ******
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py
<node2.mydomain.net> ESTABLISH SSH CONNECTION FOR USER: root
<node1.mydomain.net> ESTABLISH SSH CONNECTION FOR USER: root
<node2.mydomain.net> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r node2.mydomain.net '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<node1.mydomain.net> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r node1.mydomain.net '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py
<master.mydomain.net> ESTABLISH SSH CONNECTION FOR USER: root
<master.mydomain.net> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r master.mydomain.net '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<master.mydomain.net> (0, '\n{"invocation": {"module_args": {"deployment_type": "origin"}}, "changed": false, "ansible_facts": {}}\n', '')
<node2.mydomain.net> (0, '\n{"invocation": {"module_args": {"deployment_type": "origin"}}, "changed": false, "ansible_facts": {}}\n', '')
<node1.mydomain.net> (0, '\n{"invocation": {"module_args": {"deployment_type": "origin"}}, "changed": false, "ansible_facts": {}}\n', '')
ERROR! Unexpected Exception, this is probably a bug: update expected at most 1 arguments, got 2
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 118, in <module>
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 122, in run
results = pbex.run()
File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 156, in run
result = self._tqm.run(play=play)
File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 291, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py", line 325, in run
results += self._wait_on_pending_results(iterator)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/__init__.py", line 712, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/__init__.py", line 117, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/__init__.py", line 615, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/usr/lib/python2.7/site-packages/ansible/vars/manager.py", line 626, in set_host_facts
self._fact_cache.update(host.name, facts)
TypeError: update expected at most 1 arguments, got 2
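The TypeError at the end is the standard Python error for calling dict.update with two positional arguments. As a minimal, hypothetical sketch (illustration only, not Ansible source; the names below are invented): the built-in dict.update accepts at most one positional mapping, so a fact cache that falls back to plain dict behaviour rejects the two-argument call that set_host_facts makes in vars/manager.py.
fact_cache = {}                                   # stands in for self._fact_cache
host_facts = {"openshift_current_version": None}  # hypothetical facts payload
fact_cache.update({"master.mydomain.net": host_facts})     # single mapping: accepted
try:
    fact_cache.update("master.mydomain.net", host_facts)   # two positional args, as in the traceback
except TypeError as err:
    print(err)  # -> update expected at most 1 argument(s), got 2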
Additional Information
CentOS Linux release 7.6.1810 (Core)
INVENTORY FILE
[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
openshift_release='3.11'
openshift_deployment_type=origin
[masters]
master.mydomain.net
[etcd]
master.mydomain.net
[nodes]
master.mydomain.net openshift_node_group_name='node-config-master'
node1.mydomain.net openshift_node_group_name='node-config-compute'
node2.mydomain.net openshift_node_group_name='node-config-compute'
Also tried with Ansible 2.7.0, same result
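Since the playbook runs inside the system container, it should be using the Ansible bundled in origin-ansible:v3.11 rather than the controller's install. A quick way to check that version (assuming Docker is available on the controller; the --entrypoint override is my guess at the simplest invocation):
sudo docker run --rm --entrypoint ansible docker.io/openshift/origin-ansible:v3.11 --version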
I'm having a similar issue (similar Inventory file) trying a container install of the prerequisites.yml playbook. Any ideas?
Using module file /usr/share/ansible/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py
<InfraTwo.okd.example.com> ESTABLISH SSH CONNECTION FOR USER: root
<InfraTwo.okd.example.com> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=30 -o ControlPath=/root/.ansible/cp/%h-%r InfraTwo.okd.example.com '/bin/sh -c '"'"'/usr/bin/python && sleep 0'"'"''
<NodeOne.okd.example.com> (0, '\n{"invocation": {"module_args": {"deployment_type": "origin"}}, "changed": false, "ansible_facts": {}}\n', '')
ERROR! Unexpected Exception, this is probably a bug: update expected at most 1 arguments, got 2
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 118, in
exit_code = cli.run()
File "/usr/lib/python2.7/site-packages/ansible/cli/playbook.py", line 122, in run
results = pbex.run()
File "/usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py", line 156, in run
result = self._tqm.run(play=play)
File "/usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py", line 291, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/linear.py", line 325, in run
results += self._wait_on_pending_results(iterator)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/init.py", line 712, in _wait_on_pending_results
results = self._process_pending_results(iterator)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/init.py", line 117, in inner
results = func(self, iterator, one_pass=one_pass, max_passes=max_passes)
File "/usr/lib/python2.7/site-packages/ansible/plugins/strategy/init.py", line 615, in _process_pending_results
self._variable_manager.set_host_facts(target_host, result_item['ansible_facts'].copy())
File "/usr/lib/python2.7/site-packages/ansible/vars/manager.py", line 626, in set_host_facts
self._fact_cache.update(host.name, facts)
TypeError: update expected at most 1 arguments, got 2
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.