The goal of this repository is to provide a simple, reproducible way to deploy an OpenShift Container Platform lab using the vSphere UPI method with Static IP Addresses.
This is a concise summary of everything you need to do to use the repo; the rest of the document goes into the details of every step.
- Edit `group_vars/all.yml`; the following must be changed, while the rest can remain the same:
  - pull secret
  - IP addresses and host/domain names
  - enable/disable FIPS mode
  - vCenter details
  - datastore name
  - datacenter name
  - usernames and passwords of the admin/service accounts
  - validate that the correct govc version is set
  - enable NTP with its details, as required
- If you wish to run a specific channel and version, modify the following in `group_vars/all.yml` (see the sketch after this list):
  - download.channel
  - download.version
- For the CoreDNS VM to be able to pull its image from Quay.io, you must specify a `coredns_vm.upstream_dns`. It cannot have itself as its primary DNS server.
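As a reference point, here is a minimal sketch of those two settings in `group_vars/all.yml`; the channel, version, and address shown are illustrative values, not the repo's defaults:

```yaml
# Illustrative values only -- adjust to your environment
download:
  channel: stable         # release channel to track
  version: latest         # or a specific 4.x.z version
coredns_vm:
  upstream_dns: 1.1.1.1   # must not point at the CoreDNS VM itself
```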
- vSphere ESXi and vCenter 6.7U3 or 7.0 installed.
- A datacenter created with a vSphere host added to it, and a datastore that exists and has adequate capacity.
- Ansible (preferably the latest version) installed on the machine where this repo is cloned.
  - Before you install Ansible, install the `epel-release` package: run `yum -y install epel-release` (see the example after this list).
- Your DNS provider (Pi-hole, AdGuard, etc.) should be configured to look up your `base_domain` from your `coredns_vm.ipaddr`.
- Optionally, you can configure `coredns_vm.upstream_dns` to be your primary DNS server and then configure your workstation or bastion host to use the CoreDNS server as its primary DNS server.
  - If you wish to use CoreDNS as your primary DNS server, see the deploy-aio-lab.yml Extra Variables section below.
- If you plan to deploy the DNS or LoadBalancer using this playbook, you need to be running an OS with a glibc version higher than 2.32, such as Fedora 33 or higher, or you can deploy from the ocp4-vsphere-deploy-container.
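For example, on a RHEL/CentOS machine the Ansible prerequisite can be satisfied like this (a sketch, assuming EPEL provides an Ansible package for your release):

```sh
# Install the EPEL repository first, then Ansible from it
yum -y install epel-release
yum -y install ansible
```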
NOTE: If you are going to use the CoreDNS VM as your primary DNS server, you must specify your vCenter in group_vars/all.yml as an IP address, since no A record will exist for it.
Pre-populated entries in group_vars/all.yml are ready to be used unless you need to customize further. Any updates described below refer to group_vars/all.yml unless otherwise specified.
- Get the pull secret from here. Update the file on the line with the location of your `pull_secret`, e.g. ~/openshift/pull-secret.json
- Get the vCenter details:
  - IP address
  - Service account username (can be the same as admin)
  - Service account password (can be the same as admin)
  - Admin account username
  - Admin account password
  - Datacenter name (created in the prerequisites mentioned above)
  - Datastore name
  - Absolute path of the vCenter folder to use (optional). If this field is not populated, it is auto-populated and points to `/${vcenter.datacenter}/vm/${infraID}`
- Downloadable link to `govc` (vSphere CLI, pre-populated)
- OpenShift cluster (see the consolidated sketch below):
  - base domain (pre-populated with example.com)
  - cluster name (pre-populated with ocp4)
  - network type (pre-populated with OpenShiftSDN)
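Taken together, those entries look roughly like the sketch below. The key names (`ip`, `username`, `admin_username`, `folder_absolute_path`, and the `config` names) are assumptions inferred from the fields listed above; defer to the pre-populated names in your copy of `group_vars/all.yml`.

```yaml
# Sketch only -- key names are assumed, values are placeholders
vcenter:
  ip: 10.0.0.10                  # an IP (not an FQDN) if CoreDNS will be your primary DNS
  username: svc-ocp@vsphere.local
  password: "changeme"
  admin_username: administrator@vsphere.local
  admin_password: "changeme"
  datacenter: MyDC
  datastore: MyDatastore
  folder_absolute_path: ""       # optional; defaults to /${vcenter.datacenter}/vm/${infraID}
config:
  base_domain: example.com
  cluster_name: ocp4
  network_type: OpenShiftSDN
```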
- If you wish to install without enabling the Kubernetes vSphere Cloud Provider (useful for mixed installs with both virtual nodes and bare-metal nodes), change `provider:` to `none` in all.yml:

```yaml
config:
  provider: none
  base_domain: example.com
  ...
```
- If you wish to enable custom NTP servers on your nodes, set `ntp.custom` to `True` and define `ntp.ntp_server_list` to fit your requirements:

```yaml
ntp:
  custom: True
  ntp_server_list:
    - 0.rhel.pool.ntp.org
    - 1.rhel.pool.ntp.org
```
```sh
# Deploy the Lab and all components
ansible-playbook deploy-aio-lab.yml
```
- `config_local_dns=true` - Configures /etc/resolv.conf or systemd-resolved to use CoreDNS as the primary DNS server after CoreDNS has been deployed.
- `skip_ova=true` - Skips downloading and deploying the OVA if it was previously deployed to vCenter.
- `skip_lb=true` - Skips deploying the LoadBalancer VM if a LoadBalancer already exists.
- `skip_dns=true` - Skips deploying a DNS server if proper DNS is already configured.
- `specific_version=4.6.z` - Deploys a specific version of OpenShift. Must be in 4.x.z format. These variables can be combined; see the example after this list.
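For example, to reuse an OVA that is already in vCenter and point the local resolver at CoreDNS afterwards (an illustrative combination, not a required one):

```sh
ansible-playbook deploy-aio-lab.yml -e skip_ova=true -e config_local_dns=true
```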
```sh
# Destroy the Lab and all components
ansible-playbook destroy-aio-lab.yml -e cluster=true

# Destroy the Lab and all components while retaining the OVA
ansible-playbook destroy-aio-lab.yml -e cluster=true -e skip_ova=true

# Destroy the Lab and all components and revert the DNS configuration
ansible-playbook destroy-aio-lab.yml -e cluster=true -e config_local_dns=true
```
- `cluster=true` - Required to delete the entire cluster, OVA, folder, and VMware tag.
- `skip_ova=true` - Retains the OVA (skips deleting it) if it was previously deployed to vCenter.
- `bootstrap=true` - Deletes only the bootstrap VM (see the example after the note below).
NOTE: The OVA file needs to be outside of the Cluster Folder in VMware for `-e skip_ova=true` to work.
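For example, once the installation has completed and the bootstrap VM is no longer needed, it can be removed on its own:

```sh
# Remove only the bootstrap VM, leaving the rest of the cluster in place
ansible-playbook destroy-aio-lab.yml -e bootstrap=true
```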
- Necessary Linux packages installed for the installation
- Necessary folders [bin, downloads, downloads/ISOs, install-dir] created
- OpenShift client, installer, and .ova binaries downloaded to the downloads folder
- Unzipped versions of the binaries installed in the bin folder
- In the install-dir folder (see the layout sketch after this list):
  - master.ign and worker.ign
  - Copy of the install-config.yaml
- A folder is created in the vCenter under the mentioned datacenter and the template is imported
- The template file is edited to carry certain default settings and runtime parameters common to all the VMs
- VMs (coredns, lb, bootstrap, master0-2, worker0-2) are created in the designated folder and powered on
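Put together, the layout at the repo root looks roughly like this (an illustrative sketch assembled from the list above, not verbatim tool output):

```text
.
├── bin/                  # unzipped client and installer binaries
├── downloads/
│   └── ISOs/
└── install-dir/
    ├── auth/             # kubeconfig (and kubeadmin-password) after the install
    ├── master.ign
    ├── worker.ign
    └── install-config.yaml
```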
```sh
# In the root folder of this repo run the following commands
export KUBECONFIG=$(pwd)/install-dir/auth/kubeconfig
export PATH=$(pwd)/bin:$PATH

# OpenShift Client Commands
oc whoami
oc get co
```
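The installer also writes the generated kubeadmin password next to the kubeconfig, which is useful for logging in to the web console:

```sh
# Password for the kubeadmin user, generated by openshift-install
cat install-dir/auth/kubeadmin-password
```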
- Specify the worker_prefix in `group_vars/all.yml` under the config section.
- Add a new line to `group_vars/all.yml` under `worker_vms`, e.g.:

  `- { name: "worker3", ipaddr: "10.0.0.26", cpus: "2", memory_mb: "8192", disk_size_gb: "120" }`
- Running `add-new-nodes.yml` will add all the additional new worker nodes and redeploy CoreDNS and the HAProxy LB with the new node information (see the example after this list).
  - To not redeploy the CoreDNS VM, add the extra variable `skip_dns=true`
  - To not redeploy the HAProxy LB VM, add the extra variable `skip_lb=true`
- If you choose to redeploy the HAProxy LB VM, you can scale the ingress controller by following these steps: Scaling an Ingress Controller
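For example, to add the new workers while keeping an externally managed load balancer untouched:

```sh
ansible-playbook add-new-nodes.yml -e skip_lb=true
```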
Vijay Chintalapati, Mike Allmen, and all the contributors to the ocp4-vsphere-upi-automation repo that inspired this repository.
Morgan Peterman