
OpenShift Hybridizer

Ansible scripts that can be used to provision a hybrid cloud environment and to generate the Ansible scripts required to deploy an All-In-One OpenShift cluster onto it.

Currently supported Cloud Providers:

  • Azure (azr)

  • Amazon Web Services (aws)

  • Google Cloud Platform (gcp)

Supports only OpenShift 3.10 or above

Sources

The sources of these scripts can be downloaded from GitHub.

Let's clone the sources to a directory on the local file system:

$ git clone https://github.com/redhat-developer-demos/openshift-hybridizer

For convenience, we will refer to the clone directory as $PROJECT_HOME.

Pre-Requisites

Installer Image

The installer image is built from Ansible Runner with the Ansible cloud modules that are required to provision the cloud resources. The provisioned cloud resources can then be used to deploy the All-In-One OpenShift cluster.

The installer image is available at docker.io/kameshsampath/ansible-runner; to pull it, run:

$ docker pull docker.io/kameshsampath/ansible-runner

Create Hybrid Cloud Instances

Preparation

Rename $PROJECT_HOME/env/extravars.example to $PROJECT_HOME/env/extravars; this file will be used to configure your cloud keys and other Ansible facts.
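For example:

$ mv $PROJECT_HOME/env/extravars.example $PROJECT_HOME/env/extravars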

The following cloud provider specific sections detail the variables that can be defined in extravars.

📎
The $PROJECT_HOME/env/extravars file follows YAML conventions.
Variable Name: clouds
Description:   The public cloud(s) where to provision. Currently supported values are azr, aws and gcp.
Example:

clouds:
 - gcp
 - azr
 - aws

The example configures provisioning of three clouds: AWS, Azure and Google Cloud Platform.

Variable Name: instance_name
Description:   The compute instance name that will be assigned.
Default value: openshift-all-in-one

Variable Name: gcp_rollback
Description:   Delete all Google Cloud Platform resources that were provisioned.
Default value: False

Variable Name: azure_rollback
Description:   Delete all Azure resources that were provisioned.
Default value: False

Variable Name: aws_rollback
Description:   Delete all Amazon Web Services resources that were provisioned.
Default value: False
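Putting the variables together, a minimal $PROJECT_HOME/env/extravars could look like the following sketch (illustrative values; your cloud credential keys are provider specific and live in the same file):

clouds:
  - gcp
instance_name: openshift-all-in-one
gcp_rollback: False
azure_rollback: False
aws_rollback: False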

Amazon Web Services

Provisioning

The provisioning consists of two parts:

  • Provisioning Cloud Resources

  • Deploying OpenShift

Cloud Resources

$ ./provision.sh

Deprovisioning

The undeploying of Cloud Resources is controlled by three main variables that are defined in env/extravars:

gcp_rollback: False # (1)
azure_rollback: False # (2)
aws_rollback: False # (3)
  1. Set to True to undeploy GCP resources

  2. Set to True to undeploy Azure resources

  3. Set to True to undeploy AWS resources

$ ./deprovision.sh
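For example, to tear down only the GCP resources, set gcp_rollback to True in env/extravars and rerun the script. One way to do that from the shell, assuming the variable appears exactly as shown above:

$ sed -i 's/^gcp_rollback: False/gcp_rollback: True/' env/extravars
$ ./deprovision.sh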
📎

For ease of explanation, the sections that follow assume you have provisioned for gcp.

Connecting to OpenShift Node

The following commands show how you can connect to the provisioned instance via ssh:

$ cd $PROJECT_HOME/out/gcp
$ ./connect.sh

The connect.sh script also holds information about the public IP, the ssh user and the private key to be used.
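Conceptually, connect.sh is a thin wrapper around ssh; with the placeholders filled in from the provisioning output, it amounts to something like:

$ ssh -i <private-key> <ssh-user>@<public-ip>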

Deploy OpenShift

After successful provisioning of Cloud Resources, there should be one directory per cloud created under $PROJECT_HOME/out.

e.g. The following shows the directory tree for Azure and GCP

 out
 |-azr
 |---inventory (1)
 |-----host_vars (2)
 |---connect.sh (3)
 |---host_prepare.yaml (4)
 |---deploy.sh (5)
 |---docker-storage-setup (6)
 |---add_openshift_users.yaml (7)
 |---add-openshift-users.sh (8)
 |---openshift_users.yaml (9)
 |-gcp
 |---inventory (1)
 |-----host_vars (2)
 |---connect.sh (3)
 |---host_prepare.yaml (4)
 |---deploy.sh (5)
 |---docker-storage-setup (6)
 |---add_openshift_users.yaml (7)
 |---add-openshift-users.sh (8)
 |---openshift_users.yaml (9)
  1. The cloud specific Ansible Inventory directory

  2. host_vars, the Ansible host variables for the cloud provider

  3. The SSH connect utility; it holds the IP address of the OpenShift node

  4. The Cloud Host OpenShift Deployment preparation tasks

  5. The OpenShift Deploy script

  6. The Docker storage setup file

  7. The Ansible playbook to add users who will have access to the OpenShift Web Console

  8. The utility script to run the add_openshift_users play

  9. openshift_users.yaml, the users that need to be added, modified, or deleted from the OpenShift users file

e.g. Let's say you want to deploy OpenShift onto your Google Cloud Platform (gcp) instance; run the following commands:

$ cd $PROJECT_HOME/out/gcp
$ ./deploy.sh
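Once deploy.sh completes, a quick sanity check is to connect to the node and list the cluster nodes; the single All-In-One node should report Ready:

$ ./connect.sh
$ sudo -i
# oc get nodes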

Add Users to OpenShift

No users are created by default by the OpenShift installation; this section details how to add new users.

The installed OpenShift is configured by default to use HTPasswd as the identity provider; with the HTPasswd identity provider, the default htpasswd file is /etc/origin/master/htpasswd.

The following section details how to add, update, and remove users in the htpasswd file to allow them access to the OpenShift Web Console.

The out/<cloud>/openshift_users.yaml has two variables defined:

openshift_users - a list of dicts/hashes with the keys username and an optional password; if password is omitted, a random 8-letter password will be generated

e.g.

openshift_users:
    - {username: "developer", password: "supers3cret"}
    - {username: "demo"}  # (1)
  1. In this case the password for the user demo will be generated

openshift_delete_users - a list of usernames that need to be removed or deleted from the OpenShift users htpasswd file, e.g.

openshift_delete_users:
    - developer # (1)
  1. The user developer will be deleted from the OpenShift users htpasswd file

After you have defined the users, run the following command:

$ cd $PROJECT_HOME/out/gcp
$ ./add-openshift-users.sh
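Under the hood, add-openshift-users.sh runs the add_openshift_users play; conceptually it is roughly equivalent to the following (the exact invocation and inventory wiring are assumptions):

$ ansible-playbook -i inventory add_openshift_users.yaml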

Adding Admin User to OpenShift

Follow the steps defined above to add a new user called admin with a password of your choice. To grant the user admin Cluster Admin privileges, you may need to log in to the node and execute the following commands:

$ cd $PROJECT_HOME/out/gcp
$ ./connect.sh
$ sudo -i
# oc login -u system:admin
# oc adm policy add-cluster-role-to-user cluster-admin admin
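To verify, log in as the new admin user and run a command that requires cluster-admin privileges, for example:

# oc login -u admin
# oc get nodes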