Repo for bootstrapping Ansible Tower instances.
1. Credentials repository
2. Building Images
3. Ansible Tower Installation
4. Bootstrapping
5. Contributing
## Credentials repository

This project uses an external credentials repository as its inventory source. That repository also contains all of the variables and the password required to run the playbooks in this project.

Once the external credentials repository has been bootstrapped with the variables for your environment, use it as the inventory source when executing playbooks in this project, replacing `<path-to-local-credentials-project>` with the path to your local credentials project.
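The exact layout of the credentials repository is environment-specific, but the commands below assume an Ansible INI inventory along these lines (the group and host names here are purely illustrative):

```ini
# <path-to-local-credentials-project>/inventories/hosts
# Illustrative sketch only -- the real layout is defined by your credentials repository.
[tower]
tower.example.com
```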
## Building Images

To build and tag the Integreatly Ansible Tower Base Docker image, run:

```sh
cd images/tower_base/ && make
```

To push the built image to quay.io, run:

```sh
make image/push
```

To build and tag the Ansible Tower Bootstrap Docker image, run:

```sh
cd images/tower_bootstrap/ && make
```

To push the built image to quay.io, run:

```sh
make image/push
```
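Pushing assumes you are already authenticated against quay.io; if you are not, log in first with the standard Docker CLI (the Makefile does not do this for you):

```sh
docker login quay.io
```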
## Ansible Tower Installation

The `install_tower.yml` playbook installs Ansible Tower on a target OpenShift cluster. The playbook requires the target OpenShift cluster to be specified:

- `tower_openshift_master_url`: The URL of the target OpenShift cluster

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/install_tower.yml -e tower_openshift_master_url=<tower_openshift_master_url> -e tower_openshift_pg_pvc_size=10Gi --ask-vault-pass
```
A number of default values are used when installing Ansible Tower on the target OpenShift cluster, any of which can be overridden by passing extra variables (`-e`) on the command line. These defaults include several passwords whose default value is `CHANGEME`, as can be seen below.
- `tower_openshift_project`: The name of the newly created OpenShift project (default: `tower`)
- `tower_version`: The version of the Ansible Tower OpenShift setup project to install (default: `3.4.3`)
- `tower_archive_url`: The URL of the Ansible Tower OpenShift installation project archive file to be used (default: `https://releases.ansible.com/ansible-tower/setup_openshift/<tower_version>`)
- `tower_admin_user`: The username required to log in to the newly installed Tower instance (default: `admin`)
- `tower_admin_password`: The password required to log in to the newly installed Tower instance (default: `CHANGEME`)
- `tower_rabbitmq_password`: The password required to log in to RabbitMQ (default: `CHANGEME`)
- `tower_pg_password`: The password required to log in to PostgreSQL (default: `CHANGEME`)
- `tower_openshift_pg_pvc_size`: The size of the Postgres persistent volume (default: `100Gi`, which is recommended for production environments)
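For example, to install with non-default passwords and a smaller Postgres volume, override the defaults on the command line (the values shown are placeholders):

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/install_tower.yml \
  -e tower_openshift_master_url=<tower_openshift_master_url> \
  -e tower_admin_password='<strong-admin-password>' \
  -e tower_rabbitmq_password='<strong-rabbitmq-password>' \
  -e tower_pg_password='<strong-pg-password>' \
  -e tower_openshift_pg_pvc_size=10Gi \
  --ask-vault-pass
```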
Once the new Tower instance has been successfully installed on the target OpenShift cluster, the host name of the new Tower instance must be placed into the `tower_host` variable, which is located in the `tower.yml` file in the `external_credentials_repo` project:

```yaml
<env>_tower_host: 'tower.example.com'
```
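The `<env>` prefix corresponds to the Tower environment name used elsewhere in this project (dev/test/qe etc.), so a dev entry might look like the following (the host name is illustrative):

```yaml
dev_tower_host: 'tower.dev.example.com'
```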
To allow the Cluster Provision job to run successfully, the Ansible task runner must be allowed to access the `/tmp` directory:

- Log in to the Tower instance
- Click settings in the bottom left
- Click `Jobs`
- Enter `/tmp` into the `PATHS TO EXPOSE TO ISOLATED JOBS` box and click `SAVE`
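If you prefer to script this step, the same setting can be applied through the Tower REST API. The sketch below assumes `AWX_ISOLATION_SHOW_PATHS` is the setting key backing the `PATHS TO EXPOSE TO ISOLATED JOBS` field (as in Tower/AWX 3.x); verify against your Tower version before relying on it:

```sh
# Uses the admin credentials and tower_host from your credentials repository.
curl -sk -u admin:<tower_admin_password> \
  -H 'Content-Type: application/json' \
  -X PATCH https://<tower_host>/api/v2/settings/jobs/ \
  -d '{"AWX_ISOLATION_SHOW_PATHS": ["/tmp"]}'
```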
## Bootstrapping

The `bootstrap.yml` playbook runs the Prerequisite, Integreatly, Cluster Create and Cluster Teardown bootstrap playbooks in succession. These individual playbooks can also be run independently; instructions for doing so are given in the following sections. The playbook requires the target Tower environment to be specified:

- `tower_environment`: The Ansible Tower environment (dev/test/qe etc.)

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap.yml -e tower_environment=<tower-environment> --ask-vault-pass
```
If you also wish to bootstrap the Tower instance with the OSD Integreatly install workflow, run the following playbook after `bootstrap.yml`:

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_osd_integreatly_install.yml -e tower_environment=<tower-environment>
```

This creates the resources required by the `Integreatly_Bootstrap_and_install_[OSD]` workflow, which installs Integreatly on a targeted OSD cluster.
Prior to running any jobs stored in this repository, the target Ansible Tower instance must first be bootstrapped with some generic resources. The playbook requires the target Tower environment to be specified:

- `tower_environment`: The Ansible Tower environment (dev/test/qe)

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_tower.yml -e tower_environment=<tower-environment>
```
The `bootstrap_integreatly.yml` playbook bootstraps a target Ansible Tower instance with all of the resources required to execute a workflow that allows end users to install Integreatly on a specified OpenShift cluster. Note: this is currently limited to clusters that are provisioned via the Tower cluster provision workflow.

No additional parameters are required by default:

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_integreatly.yml --ask-vault-pass
```
Following the bootstrapping of Integreatly resources, a new workflow named `Integreatly Install Workflow` should be available from the Tower console.

The workflow requires the following parameters to be specified before running:

- `Cluster Name`: The name/ID of the OpenShift cluster to target
- `Openshift Master URL`: The public URL of the OpenShift master
- `Cluster Admin Username`: The username of a cluster-admin account on the target OpenShift cluster
- `Cluster Admin Password`: The password of the specified cluster admin account
- `GIT URL`: The URL of the target Integreatly installer Git repository
- `GIT Ref`: The Git reference for the Integreatly installer repository
- `User Count`: The number of users to pre-seed the Integreatly environment with
- `Self Signed Certs`: Set to `false` by default; set to `true` if the target OpenShift cluster uses self-signed certificates
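As an alternative to the console, a bootstrapped workflow can also be launched through the Tower REST API. The sketch below assumes the workflow job template ID has already been looked up (for example via `/api/v2/workflow_job_templates/?name=...`) and that the survey fields above map onto `extra_vars` keys; the key names shown are hypothetical, so check what your Tower instance actually exposes:

```sh
curl -sk -u admin:<tower_admin_password> \
  -H 'Content-Type: application/json' \
  -X POST https://<tower_host>/api/v2/workflow_job_templates/<workflow_id>/launch/ \
  -d '{"extra_vars": {"cluster_name": "<cluster-name>", "openshift_master_url": "<master-url>"}}'
```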
Following the bootstrapping of Integreatly resources, a new workflow named `Integreatly Uninstall Workflow` should be available from the Tower console.

The workflow requires the following parameters to be specified before running:

- `Cluster Name`: The name/ID of the OpenShift cluster to target
- `Openshift Master URL`: The public URL of the OpenShift master
- `Cluster Admin Username`: The username of a cluster-admin account on the target OpenShift cluster
- `Cluster Admin Password`: The password of the specified cluster admin account
- `GIT URL`: The URL of the target Integreatly installer Git repository
- `GIT Ref`: The Git reference for the Integreatly installer repository
Once the Tower bootstrapping has been run, you can bootstrap the cluster create resources. To create all of the resources necessary to run a cluster create, run the `bootstrap_cluster_create.yml` playbook. The playbook doesn't take any extra variables, so the command to run is:

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_cluster_create.yml --ask-vault-pass
```
Once the cluster provision resources have been bootstrapped, a new workflow named `Provision Cluster` should be available from the Tower console.

The workflow requires the following parameters to be specified before running:

- `Cluster Name`: The name/ID of the OpenShift cluster to target
- `AWS Region`: The region to create the cluster in
- `Domain Name`: The domain name to be used to create the cluster
- `AWS Account Name`: The name of the AWS account to be used to create the cluster
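If you drive this workflow from automation rather than the console, the parameters above typically end up as extra variables. A sketch of what that payload might look like, with entirely hypothetical variable names and values (check the bootstrapped templates for the real ones):

```yaml
cluster_name: my-cluster
aws_region: eu-west-1
domain_name: example.com
aws_account_name: my-aws-account
```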
Once the Tower bootstrapping has been run, you can bootstrap the cluster deprovision resources. To create all of the resources necessary to run a cluster deprovision, run the `bootstrap_cluster_teardown.yml` playbook. The playbook doesn't take any extra variables, so the command to run is:

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_cluster_teardown.yml --ask-vault-pass
```
Once the cluster deprovision resources have been bootstrapped, a new workflow named `Deprovision Cluster` should be available from the Tower console.

The workflow requires the following parameters to be specified before running:

- `Cluster Name`: The name/ID of the OpenShift cluster to target
- `AWS Region`: The region that the cluster resides in
- `Domain Name`: The cluster domain name
- `AWS Account Name`: The name of the AWS account used to create the cluster
## Contributing

Please open a GitHub issue for any bugs or problems you encounter.