"No package matching 'podman' found available, installed or updated"
willr3 opened this issue · 3 comments
I tried to run Jetski for the SCALE lab but the deployment is failing with:

```
TASK [shared-labs-prep : Install packages] **************************************************************
Friday 07 August 2020  07:17:24 -0400 (0:00:00.047)       0:07:57.670 *********
fatal: [f24-h16-000-r630.rdu2.scalelab.redhat.com]: FAILED! => {"changed": false, "msg": "No package matching 'podman' found available, installed or updated", "rc": 126, "results": ["No package matching 'podman' found available, installed or updated"]}
```
When I ssh into that host directly:

```
[root@f24-h16-000-r630 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
```
I changed the following in `all.yaml`:

```yaml
cloud_name: cloudN #the correct cloud number for my allocation
ansible_ssh_pass: 12345 #the same password I use on my luggage
ansible_ssh_key: "{{ ansible_user_dir }}/.ssh/core_rsa" #the comments say to not change this but we need to use a shared key
version: "4.5.4"
pullSecret: '{scaryJson}'
foreman_url: https://foreman.rdu2.scalelab.redhat.com/ #from http://pastebin.test.redhat.com/890421
rebuild_provisioner: true #changed this to try and force the host to 8.1 instead of 7.7
#worker_count: 0 #I commented this out to get all the remaining hosts to be workers
```
Is there something I am missing in the setup to get the host to correctly install RHEL 8.1 so it can run podman? Am I correct that I needed to comment out `worker_count` to get all the hosts to act as workers?
Thank you for raising this issue.
Enabling `rebuild_provisioner` should be all that you need to make sure the provisioner node is deployed with RHEL 8. Can you check your earlier ansible output for the status of the tasks from the bootstrap role's `20_reprovision_nodes.yml` tasks?
> Am I correct that I needed to comment out `worker_count` to get all the hosts to act as workers?

This is correct. Commenting out `worker_count` should result in all nodes except those reserved for the provisioner node and the master nodes being used as workers.
In `roles/bootstrap/tasks/10_load_inv.yml`:

```yaml
- name: Set worker count
  set_fact:
    worker_count: "{{ ocp_node_content.nodes | length - 3 }}"
  when: worker_count is not defined
```
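As a rough sketch of what that fallback computes (the node total below is a made-up example; the real value comes from the length of `ocp_node_content.nodes` in your allocation):

```shell
# Hypothetical allocation size; substitute the node count from your own allocation.
total_nodes=10

# Mirrors the set_fact default: when worker_count is left unset,
# everything beyond the three master nodes becomes a worker.
worker_count=$((total_nodes - 3))
echo "$worker_count"   # prints 7
```

So with `worker_count` commented out, the worker pool grows automatically with the size of the allocation instead of being pinned to a fixed number.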
You can reach us internally in #forum-kni-perfscale on the CoreOS Slack instance for troubleshooting help.
I ran the script before the allocation was ready; after re-running it once the cluster was ready, I did not see the error.