Getting "Fail openshift_kubelet_name_override for new hosts" in Ansible logs
Description
I am getting this error:
PLAY [Fail openshift_kubelet_name_override for new hosts]
Version
- ansible --version: ansible 2.6.19
- git describe (if operating from a git clone): NIL
- rpm -q openshift-ansible (if running from playbooks installed via RPM): openshift-ansible-3.11.146-1.git.0.fcedb45.el7.noarch
Steps To Reproduce
NIL
Expected Results
Openshift cluster is provisioned successfully.
Observed Results
I am getting the error given below:
PLAY [Fail openshift_kubelet_name_override for new hosts]
I have tried multiple versions of openshift-ansible but I am still getting the same error. This issue has apparently been solved before, but after reading the resolution details I did not understand it. Can you kindly explain how to resolve this issue?
Additional Information
- Operating system and version (cat /etc/redhat-release): Red Hat Enterprise Linux Server release 7.6 (Maipo)
- Inventory file (especially any non-standard configuration parameters):
```yaml
OSEv3:
  children:
    masters:
      hosts:
        master.openshift.local: ""
    etcd:
      hosts:
        master1.openshift.local: ""
    nodes:
      hosts:
        master1.openshift.local:
          openshift_node_group_name: node-config-master
        infra1.openshift.local:
          openshift_node_group_name: node-config-infra
        node-databas1.openshift.local:
          openshift_node_group_name: node-config-compute-databas
  vars:
    ansible_become: true
    ansible_ssh_user: XX
    ansible_user: XX
    containerized: true
    timeout: 60
    openshift_deployment_type: openshift-enterprise
    openshift_release: "v3.11"
    os_sdn_network_plugin_name: 'redhat/openshift-ovs-networkpolicy'
    openshift_disable_check: X
    openshift_node_groups: XXXX
    openshift_master_identity_providers: XXXX
    openshift_master_cluster_hostname: master.X.com
    openshift_master_cluster_public_hostname: X.com
    openshift_master_default_subdomain: X.com
    openshift_repos_enable_testing: false
    openshift_master_bootstrap_auto_approve: "true"
    openshift_cloudprovider_kind: "azure"
    openshift_cloudprovider_azure_location: "X"
    openshift_clusterid: X
    template_service_broker_install: "false"
    openshift_cluster_monitoring_operator_install: "false"
    openshift_logging_install_logging: "false"
    openshift_logging_es_memory_limit: "1024M"
    openshift_logging_es_nodeselector:
      node-role.kubernetes.io/compute: "true"
    openshift_cluster_admin_users:
      - XXXXXX
```
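Note that openshift_kubelet_name_override never appears in this inventory, so if the check trips, the variable is presumably being set elsewhere (group_vars, host_vars, or facts from a previous run). A minimal sketch of a throwaway playbook to see which hosts actually carry the variable; the file name find_override.yml is hypothetical:

```yaml
# find_override.yml -- hypothetical helper, not part of openshift-ansible.
# Prints the value of openshift_kubelet_name_override (or "undefined")
# for every host in the nodes group, without touching the cluster.
- hosts: nodes
  gather_facts: false
  tasks:
    - name: Show openshift_kubelet_name_override per host
      debug:
        msg: "{{ openshift_kubelet_name_override | default('undefined') }}"
```

Running it with ansible-playbook against the same inventory shows where the value is coming from.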
The variable openshift_kubelet_name_override is defined in the file given below. Can someone kindly explain what the purpose of this variable is?
@vrutkovs @mtnbikenc I need help with this issue, thank you.
How do I undefine the openshift_kubelet_name_override variable?
Please include the necessary information:
- full inventory
- which playbook you are running, and its output with the -vvv switch
openshift-ansible/images/installer/root/usr/local/bin/generate
This is a binary to generate the openshift inventory; however, it's no longer being used.
@vrutkovs the error originates in this file. Looking at it, I came to the conclusion that the failure is triggered by this condition on lines 10 and 11:
- openshift_kubelet_name_override is defined
- openshift_cloudprovider_kind | default('', true) != 'azure'
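For context, a check built from those two conditions would look roughly like the sketch below. This is a reconstruction from the quoted when-clauses, not the verbatim task from openshift-ansible; the fail message is assumed:

```yaml
# Reconstructed sketch of the failing check; only the two when-conditions
# are taken from the playbook quoted above, the rest is assumed.
- name: Fail openshift_kubelet_name_override for new hosts
  fail:
    msg: openshift_kubelet_name_override is only supported when openshift_cloudprovider_kind is 'azure'
  when:
    - openshift_kubelet_name_override is defined
    - openshift_cloudprovider_kind | default('', true) != 'azure'
```

Read this way, the play fails any new host for which the variable is defined while the cloud provider is anything other than azure.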
The question I want to ask is: how can I undefine openshift_kubelet_name_override and assign azure to the openshift_cloudprovider_kind variable?
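Based on the two conditions above, making either one false satisfies the check: ensure that no inventory, group_vars, or host_vars file sets openshift_kubelet_name_override, and set the cloud provider kind to azure. A minimal sketch of the relevant vars block, assuming the inventory layout posted earlier:

```yaml
# Sketch of the relevant part of the OSEv3 vars block. Do not set
# openshift_kubelet_name_override anywhere -- deleting its definition is the
# only option, since Ansible has no way to "undefine" a variable once a vars
# source defines it; with it absent, the first condition is false.
OSEv3:
  vars:
    openshift_cloudprovider_kind: "azure"   # makes the second condition false as well
```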
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle stale
not stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now, please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
not close