This role deploys OCP4 (OpenShift Container Platform 4) on a KVM host. It was developed to support the Qubinode Installer.
This role depends on Red Hat IdM as the DNS server; PRs are welcome to add support for other DNS servers.
Please see the example playbook below.
Example Playbook:
```yaml
- name: Deploy OpenShift 4.x Cluster
  hosts: localhost
  become: yes
  vars:
    local_user_account: admin
    ocp4_version: 4.3.0
    ocp4_dependencies_version: "{{ ocp4_version[:3] }}"
    ocp4_image_version: "{{ ocp4_version[:3] + '.0' }}"
    installation_working_dir: /home/admin/qubinode-installer
    pull_secret: "{{ installation_working_dir }}/pull-secret.txt"
    vm_public_key: "/home/{{ local_user_account }}/.ssh/id_rsa.pub"
    openshift_install_folder: ocp4
    openshift_install_dir: "{{ installation_working_dir }}/{{ openshift_install_folder }}"
    ignition_files_dir: "{{ openshift_install_dir }}"
    ssh_ocp4_public_key: "{{ lookup('file', vm_public_key) }}"
    podman_webserver: qbn-httpd
    rhcos_webserver_img_name: rhcos-webserver
    dest_ignitions_web_directory: "{{ webserver_directory }}/{{ ocp4_dependencies_version }}/ignitions/"
    webserver_directory: /opt/qubinode_webserver
    webserver_dependencies: "{{ webserver_directory }}/{{ ocp4_dependencies_version }}"
    webserver_images: "{{ webserver_directory }}/{{ ocp4_dependencies_version }}/images"
    coreos_installer_kernel: "rhcos-{{ ocp4_image_version }}-x86_64-installer-kernel"
    coreos_installer_initramfs: "rhcos-{{ ocp4_image_version }}-x86_64-installer-initramfs.img"
    coreos_metal_bios: "rhcos-{{ ocp4_image_version }}-x86_64-metal.raw.gz"
    openshift_mirror: http://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{ ocp4_dependencies_version }}/{{ ocp4_image_version }}
    coreos_tmp_dir: /tmp/build_coreos_container
    tear_down: false
    virtinstall_dir: "{{ installation_working_dir }}/rhcos-install/"
    internal_domain_name: lunchnet.example
    external_domain_name: "{{ internal_domain_name }}"
    ocp4_cluster_domain: "cloud.{{ internal_domain_name }}"
    idm_server_shortname: qbn-dns01
    user_idm_admin: admin
    user_idm_password: "password"
    idm_server_ipaddr: 192.168.11.1
    dns_teardown: false
    idm_dns_forward_zone: "{{ ocp4_cluster_domain }}"
    idm_dns_reverse_zone: "50.168.192.in-addr.arpa."
    idm_server_fqdn: qbn-dns01.lunchnet.example
    dns_wildcard: "*.apps.{{ cluster_name }}"
    nat_gateway: "192.168.50.1"
    localstorage_version: '4.3'
    localstorage_filesystem: true
    localstorage_block: false
    localstorage_mount_path: /dev/vdc1
  environment:
    IPA_HOST: "{{ idm_server_shortname }}.{{ internal_domain_name }}"
    IPA_USER: "{{ user_idm_admin }}"
    IPA_PASS: "{{ user_idm_password }}"
  tasks:
    - name: run the role ocp4-kvm-deployer
      import_role:
        name: ocp4-kvm-deployer
```
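The `ocp4_dependencies_version` and `ocp4_image_version` values are derived from `ocp4_version` by Jinja2 string slicing. A plain-Python sketch of the same logic (note that the fixed `[:3]` slice assumes a single-digit minor version, so a version like `4.10.0` would not slice cleanly):

```python
# Mirror of the Jinja2 expressions in the vars above.
ocp4_version = "4.3.0"

# "{{ ocp4_version[:3] }}" -> major.minor, e.g. "4.3"
ocp4_dependencies_version = ocp4_version[:3]

# "{{ ocp4_version[:3] + '.0' }}" -> the .0 image release, e.g. "4.3.0"
ocp4_image_version = ocp4_version[:3] + ".0"

print(ocp4_dependencies_version)  # 4.3
print(ocp4_image_version)         # 4.3.0
```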
The role has three main task files:
- pre_tasks.yml
- tear_down.yml
- deploy.yml

pre_tasks.yml:
- Sets up global variables for the load balancer container
- Ensures the jq command is installed

tear_down.yml:
This task file is responsible for tearing down the OCP cluster. It removes all DNS entries, the VMs, the containers, and any other files created by the role.
Example Usage:

```shell
# Tear down the entire cluster
ansible-playbook rhcos.yml -e "tear_down=true"

# Remove just the webserver container
ansible-playbook rhcos.yml -e "tear_down=true" -t webserver
```
Dependency roles:
- openshift-4-loadbalancer
- ocp4-bootstrap-webserver
deploy.yml:
- check for the OCP4 pull secret and fail if it's not found
- ensure the KVM host firewall ports are open
- install_podman.yml: ensure the KVM host is set up to run podman containers
- ocp4_tools.yml: ensure the OCP4 tools, such as the oc command and the openshift-install binary, are installed
- create_ignitions.yml: generate ignition files based on the number of masters and workers specified for this cluster
- deploy the load balancer container
- deploy the webserver container that serves up the files for bootstrap
- download_rhcos_files.yml: download the files required to bootstrap RHCOS
- create a directory to store the generated virt-install scripts for RHCOS
- deploy a generated treeinfo file that defines the location of the kernel and ramdisks
- deploy_libvirt_network.yml: create the libvirt NAT network for the RHCOS VMs
- configure_dns_entries.yml: populate Red Hat IdM with the DNS entries for OCP4
- build_cluster_nodes_profile.yml: generate the virt-install scripts for building the RHCOS VMs
- build_vm_list.yml: create the variable ocp4_nodes with a list of all the nodes required for the cluster
- deploy the OCP4 RHCOS node VMs
- wait_for_vm_shutdown.yml: the OCP nodes shut down after the initial bootstrap; this task waits for all the VMs to shut down
- start up the RHCOS VMs to continue the OCP deployment
- run /usr/local/bin/openshift-install wait-for bootstrap-complete
- continue the install once bootstrap returns "It is now safe to remove the bootstrap resources"
- get all running VMs
- shut down the bootstrap node
- wait_for_vm_shutdown.yml: wait for the bootstrap node to shut down
- destroy the bootstrap node
- configure the NFS provisioner
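The wait_for_vm_shutdown.yml steps above boil down to polling each domain's state until every VM reports shut off. A minimal sketch of that control flow, assuming a virsh-style state lookup (the function names here are hypothetical, not taken from the role, which does this with Ansible modules):

```python
import time

def get_vm_state(vm_name):
    """Hypothetical stand-in for `virsh domstate <vm_name>`."""
    raise NotImplementedError

def wait_for_vm_shutdown(vm_names, poll_fn=get_vm_state, interval=10, timeout=1800):
    """Poll until every VM reports 'shut off'; raise TimeoutError otherwise."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        states = [poll_fn(name) for name in vm_names]
        if all(state == "shut off" for state in states):
            return True
        time.sleep(interval)
    raise TimeoutError("VMs did not shut down in time")
```

The same helper covers both cases in the task list: waiting on all cluster nodes after the initial bootstrap, and waiting on just the bootstrap node before destroying it.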
```shell
ansible-playbook playbooks/deploy_ocp4.yml --start-at-task="Waiting for Installation to Complete"
```
Example Usage:

```shell
ansible-playbook rhcos.yml -t setup
ansible-playbook rhcos.yml -t tools
ansible-playbook rhcos.yml -t podman
ansible-playbook rhcos.yml -t podman --skip-tags pkg
ansible-playbook rhcos.yml -t ignitions
ansible-playbook rhcos.yml -t lb
ansible-playbook rhcos.yml -t webserver
ansible-playbook rhcos.yml -t download
ansible-playbook rhcos.yml -t libvirt_net
ansible-playbook rhcos.yml -t node_profile
ansible-playbook rhcos.yml -t idm
ansible-playbook rhcos.yml -t rhcos
ansible-playbook rhcos.yml -t deploy_vms
ansible-playbook rhcos.yml -t nfs
ansible-playbook rhcos.yml -t localstorage
```
Dependency roles:
- openshift-4-loadbalancer
- nfs-provisioner-role
License: BSD