Repo for bootstrapping Ansible Tower instances.
This project uses an external credentials repository as its inventory source. That repository also includes all of the required variables and the password used when running the playbooks in this project.
Once the external credentials repository has been bootstrapped with the variables required for your own environment, use it as the inventory source when executing playbooks in this project, replacing `<path-to-local-credentials-project>` with the path to your local credentials project.
To build and tag the Integreatly Ansible Tower Base docker image simply run:
```sh
cd images/tower_base/ && make
```
To push the built image to quay.io run:
```sh
make image/push
```
The `install_tower.yml` playbook will install Ansible Tower on a target Openshift cluster. The playbook requires the following details of the target Openshift cluster to be specified:
- `tower_openshift_master_url`: The URL of the target Openshift cluster
- `tower_openshift_username`: Cluster admin user on the target Openshift cluster
- `tower_openshift_password`: Password of the cluster admin user on the target Openshift cluster
```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/install_tower.yml \
  -e tower_openshift_master_url=<tower_openshift_master_url> \
  -e tower_openshift_username=<tower_openshift_cluster_admin_username> \
  -e tower_openshift_password=<tower_openshift_cluster_admin_password> \
  -e tower_openshift_pg_pvc_size=10Gi \
  --ask-vault-pass
```
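For example, with hypothetical values filled in (the inventory path, URL and credentials below are placeholders):

```sh
# All values below are illustrative; substitute your own.
ansible-playbook -i ~/credentials/inventories/hosts playbooks/install_tower.yml \
  -e tower_openshift_master_url=https://master.example.com:8443 \
  -e tower_openshift_username=admin \
  -e tower_openshift_password='s3cret' \
  -e tower_openshift_pg_pvc_size=10Gi \
  --ask-vault-pass
```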
A number of default values are used when installing Ansible Tower on the target Openshift cluster, any of which can be overridden with the use of environment variables. These defaults include several password values which default to `CHANGEME`, as can be seen below.
- `tower_openshift_project`: The name of the newly created Openshift project (default project name is `tower`)
- `tower_version`: The version of the Ansible Tower Openshift setup project to install (default version is `3.4.3`)
- `tower_archive_url`: The URL of the Ansible Tower Openshift installation project archive file to be used (default URL is `https://releases.ansible.com/ansible-tower/setup_openshift/<tower_version>`)
- `tower_admin_user`: The username required to log in to the newly installed Tower instance (default username is `admin`)
- `tower_admin_password`: The password required to log in to the newly installed Tower instance (default password is `CHANGEME`)
- `tower_rabbitmq_password`: The password required to log in to RabbitMQ (default password is `CHANGEME`)
- `tower_pg_password`: The password required to log in to PostgreSQL (default password is `CHANGEME`)
- `tower_openshift_pg_pvc_size`: Size of the Postgres persistent volume (defaults to `100Gi`, which is recommended for production environments)
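For instance, a sketch of overriding a couple of these defaults by passing extra vars to the install command (the values shown are illustrative):

```sh
# The required tower_openshift_* variables are omitted for brevity;
# see the full install command above.
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/install_tower.yml \
  -e tower_openshift_project=my-tower \
  -e tower_admin_password='<strong-password>' \
  --ask-vault-pass
```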
Once the new Tower instance has been successfully installed on the target Openshift cluster, details of this environment must be placed into the `<env>_tower_credentials_list.yml` file in the `external_credentials_repo` project.
```yaml
<env>_tower_host: 'tower.example.com'
<env>_tower_verify_ssl: False
<env>_tower_username: '<CHANGEME>'
<env>_tower_password: '<CHANGEME>'
<env>_tower_license: '<ENCRYPTED-LICENSE>'
```
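The license value is expected to be vault-encrypted (the playbook invocations in this README use `--ask-vault-pass`). A minimal sketch of producing such a value with `ansible-vault`, assuming the license JSON is stored in a hypothetical local file `license.json`:

```sh
# Produces a vault-encrypted string that can be pasted in as <env>_tower_license.
ansible-vault encrypt_string --ask-vault-pass --name '<env>_tower_license' "$(cat license.json)"
```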
The `bootstrap.yml` playbook will run the Prerequisite, Integreatly, Cluster Create, Cluster Teardown, Validation and Notification bootstrap playbooks in succession. These individual playbooks can be run independently if required; instructions on how to do so are in the following sections. The playbook requires the target tower environment to be specified:
- `tower_environment`: The Ansible Tower environment (dev/test/qe etc.)
```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap.yml -e tower_environment=<env> --ask-vault-pass
```
If you also wish to bootstrap the Tower instance with the OSD Integreatly install workflow, run this play after running the `bootstrap.yml` play:
```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/bootstrap_osd_integreatly_install.yml -e tower_environment=<env> --ask-vault-pass
```
This will create a set of OSD-specific workflow templates used for installing, uninstalling and upgrading RHMI on OSD.
To support a large number of jobs running concurrently on Ansible Tower, it is important to ensure that the necessary resources have been configured. All jobs on Tower are run from the Task Execution container named `ansible-tower-celery`. When looking to assign additional resources to Tower jobs, it is this container that needs to be updated with new resource values.
By default, the `ansible-tower-celery` container has resource requests of `1500m` (1500 millicores) CPU and `2Gi` memory. To update these values, edit the `ansible-tower` stateful set and modify the existing values; see the example snippet below:
```yaml
name: ansible-tower-celery
resources:
  requests:
    cpu: 1500m
    memory: 2Gi
```
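For example, using the `oc` CLI (assuming the default project name `tower`; adjust `-n` to match your `tower_openshift_project`):

```sh
# Opens the stateful set for editing; update the cpu/memory values under
# the ansible-tower-celery container's resources section.
oc -n tower edit statefulset ansible-tower
```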
For new installations, the default values can be overridden as part of the install using the variables below (see the sketch after this list):

- `tower_task_mem_request`
- `tower_task_cpu_request`
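For instance, a sketch of passing these overrides to the install playbook (values are illustrative; the required `tower_openshift_*` variables are omitted for brevity):

```sh
ansible-playbook -i <path-to-local-credentials-project>/inventories/hosts playbooks/install_tower.yml \
  -e tower_task_cpu_request=2000m \
  -e tower_task_mem_request=4Gi \
  --ask-vault-pass
```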
> ℹ️ There is also a limit set for the Tower namespace named `tower-core-resource-limits`. The default values here may need to be updated to match the values set in the steps above.
Ansible Tower is intelligent enough to limit the number of jobs executed based on the configured limits. These limits are determined using algorithms for both CPU and memory; see the official docs for full details.
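As a rough guide, a sketch of those capacity heuristics (the constants below are an assumption based on upstream Tower/AWX capacity documentation, roughly 4 forks per CPU core and ~100MB per fork with ~2GB reserved for overhead; verify them against the official docs for your version):

```sh
# Approximate job-capacity calculation under the assumed constants above.
CPU_CORES=2   # CPU available to the task container
MEM_MB=4096   # memory (MB) available to the task container
echo "cpu-based capacity: $((CPU_CORES * 4)) forks"
echo "mem-based capacity: $(( (MEM_MB - 2048) / 100 )) forks"
```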
This repo is configured to run automated tests using Prow when a PR is created. One of these is an e2e test. If you want to run this test locally before pushing a PR, you can do so by taking the steps below.
```sh
export OPENSHIFT_MASTER=<master-host>
export TOWER_OPENSHIFT_USERNAME=<openshift-user>
export TOWER_OPENSHIFT_PASSWORD=<openshift-password>
export TOWER_LICENSE='<valid-tower-license-with-eula-accepted-value>'
export TOWER_USERNAME=admin
export TOWER_PASSWORD=<tower-password>
```