The sources of these scripts can be downloaded from GitHub. Let's clone the sources to a directory on the local file system:
$ git clone https://github.com/redhat-developer-demos/openshift-hybridizer
For convenience, we shall refer to the cloned sources directory as $PROJECT_HOME.
- Docker installed and available locally; depending on your environment, install native Docker for Linux, Docker for Mac, or Docker for Windows.
- Refer to the following documentation for the prerequisites of each currently supported Cloud Provider:
The installer image is built from Ansible Runner with the Ansible Cloud modules that are required to provision the cloud resources. The provisioned cloud resources can then be used to deploy an All-In-One OpenShift cluster.
The installer image is available at docker.io/kameshsampath/ansible-runner. To pull it, run the command:
$ docker pull docker.io/kameshsampath/ansible-runner
Rename $PROJECT_HOME/env/extravars.example to $PROJECT_HOME/env/extravars; this file will be used to configure your cloud keys and other Ansible facts. The following Cloud Provider specific sections detail the variables that can be defined in extravars.
📎 | The $PROJECT_HOME/env/extravars file follows YAML conventions. |
Variable Name | Description | Default value | Example |
---|---|---|---|
clouds | The public cloud(s) where to provision. Currently supported values are azr, aws and gcp. | | clouds:<br>- gcp<br>- azr<br>- aws<br>This example configures provisioning of three clouds: AWS, Azure and Google Cloud Platform. |
instance_name | The compute instance name that will be assigned | openshift-all-in-one | |
gcp_rollback | Delete all Google Cloud Platform resources that were provisioned | False | |
azure_rollback | Delete all Azure resources that were provisioned | False | |
aws_rollback | Delete all Amazon Web Services resources that were provisioned | False | |
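Putting the variables above together, a minimal extravars might look like the following sketch; the variable names and the default values come from the table above, while the chosen values are illustrative only:

```yaml
# $PROJECT_HOME/env/extravars - illustrative values only
clouds:
  - gcp                               # provision only Google Cloud Platform
instance_name: openshift-all-in-one  # the default compute instance name
gcp_rollback: False
azure_rollback: False
aws_rollback: False
```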
The undeploying of Cloud Resources is controlled by three main variables that are defined in env/extravars:
gcp_rollback: False # (1)
azure_rollback: False # (2)
aws_rollback: False # (3)
- Set to True to undeploy GCP resources
- Set to True to undeploy Azure resources
- Set to True to undeploy AWS resources
$ ./deprovision.sh
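As a sketch, you could flip a rollback flag with sed before running deprovision.sh; the key names and the simple `key: False` form are from the extravars examples above, while the sed approach and the temp file used here are just an assumed convenience to keep the example self-contained:

```shell
# Sketch: flip gcp_rollback from False to True in an extravars-style file.
# A temp copy is used here so the example runs standalone; against the real
# file you would target $PROJECT_HOME/env/extravars and then run ./deprovision.sh.
cat > /tmp/extravars <<'EOF'
gcp_rollback: False
azure_rollback: False
aws_rollback: False
EOF
sed -i 's/^gcp_rollback: False/gcp_rollback: True/' /tmp/extravars
cat /tmp/extravars
```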
📎 | For easier explanation, further sections in this document assume you have provisioned for GCP. |
The following commands show how you can connect to the provisioned instance via SSH:
$ cd $PROJECT_HOME/out/gcp
$ ./connect.sh
The connect.sh script also holds information about the public IP, the SSH user, and the private key to be used.
After successful provisioning of Cloud Resources, there should be one directory per cloud created under $PROJECT_HOME/out.
e.g. The following shows the directory tree for Azure and GCP:
out
|-azr
|---inventory (1)
|-----host_vars (2)
|- connect.sh (3)
|- host_prepare.yaml (4)
|- deploy.sh (5)
|- docker-storage-setup (6)
|- add_openshift_users.yaml (7)
|- add-openshift-users.sh (8)
|- openshift_users.yaml (9)
|-gcp
|---inventory
|-----host_vars
|- connect.sh
|- host_prepare.yaml
|- deploy.sh
|- docker-storage-setup
|- add_openshift_users.yaml (7)
|- add-openshift-users.sh (8)
|- openshift_users.yaml (9)
- The cloud-specific Ansible inventory directory
- host_vars, the Ansible host variables for the cloud provider
- The SSH connect utility; this holds the IP address of the OpenShift instance
- The cloud host OpenShift deployment preparation tasks
- The OpenShift deploy script
- The Docker storage setup file
- The Ansible playbook to add users who will have access to the OpenShift Web Console
- The utility script to run the add_openshift_users play
- openshift_users.yaml, the users that need to be added/modified/deleted from the OpenShift users file
e.g. Let's say you want to deploy OpenShift to your Google Cloud Platform (gcp) instance; run the following commands:
$ cd $PROJECT_HOME/out/gcp
$ ./deploy.sh
There are no users created by default with the OpenShift installation; this section details how to add new users.
The installed OpenShift is configured by default to use HTPasswd as the identity provider, and with the HTPasswd identity provider the default htpasswd file is /etc/origin/master/htpasswd.
The following section details how to add/update/remove users from the htpasswd file to allow users access to the OpenShift Web Console.
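For reference, an entry in an htpasswd file is simply username:hash. The playbook manages /etc/origin/master/htpasswd on the node for you, but as a local sketch (assuming openssl is available, and using the illustrative username/password from the example below) an equivalent entry could be generated like this:

```shell
# Sketch: build an htpasswd-style line locally using an Apache MD5 (apr1) hash,
# a format htpasswd files commonly use. The username and password are
# illustrative; the add-users play normally writes these entries for you.
entry="developer:$(openssl passwd -apr1 supers3cret)"
echo "$entry"
```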
The out/<cloud>/openshift_users.yaml file has two variables defined:
openshift_users - a list of dicts/hashes with the keys username and an optional password; if password is omitted, a random 8-letter password will be generated.
e.g.
openshift_users:
- {username: "developer", password: "supers3cret"}
- {username: "demo"} # (1)
- In this case the password for the user demo will be generated.
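A hedged local equivalent of that random 8-letter password generation (the play's exact generation method may differ) could be:

```shell
# Sketch: generate a random 8-character alphanumeric password, similar in
# spirit to what the add-users play does when "password" is omitted.
password=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 8)
echo "$password"
```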
openshift_delete_users - a list of usernames that need to be removed or deleted from the OpenShift users htpasswd file, e.g.
openshift_delete_users:
- developer # (1)
- The user developer will be deleted from the OpenShift users htpasswd file.
After you have defined the users, run the following command:
$ cd $PROJECT_HOME/out/gcp
$ ./add-openshift-users.sh
Follow the steps defined above to add a new user called admin with the password of your choice. To provide the user admin with Cluster Admin privileges, you might need to log in to the node and execute the following commands:
$ cd $PROJECT_HOME/out/gcp
$ ./connect.sh
$ sudo -i
$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-user cluster-admin admin