This project deploys VM infrastructure on top of OpenStack that can later be consumed by new ephemeral TripleO OpenStack deployments. Everything is built with TripleO: both the master OpenStack deployment and the child OpenStack deployments are based on the TripleO project, hence the name hextupleO. The tool uses a combination of Ansible with the shade libraries, plus some PHP and an Apache server for the dashboard (alternatively, Ansible core or Ansible Tower can be used).
HextupleO Demo: https://www.youtube.com/embed/NPZon911V5A
The project is still under development. It is fully functional, but may still contain bugs for which I take no responsibility. Use at your own discretion.
Check out the Trello board tracking new features and overall progress: https://trello.com/b/Hhz1A3hl/hextupleo-project
This project has recently been enhanced with the ability to deploy automatically with the following front ends:
- core (default) - uses Ansible core; all nested OpenStack deployments are executed with ansible-playbook -e variables.yml playbook.yml
- http - uses a custom PHP-based web portal for executing nested OpenStacks - one advantage is the ability to track all deployments in one place
- tower - uses the Ansible Tower front end for nested OpenStack execution - can also be further integrated with CloudForms
- RHEL7 VM connected to repos
- Nested KVM enabled on OpenStack compute
- Jumbo Frame support for the Tenant Network in Overcloud
- Local rpm repository for OSP and Ceph
- 2x Provider Networks in Overcloud (external)
Create a node (e.g. a RHEL VM) that will have access to the RPM repositories and to your master OpenStack deployment on both the Public and Admin endpoints.
Its networking could look the same as your undercloud: 1x External net + 1x PXE net.
This RHEL7 VM should be connected to the RHSM repos - preferably the latest OSP repos.
If you plan on using that VM as a local repo server, please edit 'repo_server'
accordingly and run the included script (make_local_repos.sh).
The master OpenStack deployment has to be configured with nested KVM enabled. Please reference this example if you don't know how to configure that:
https://github.com/jonjozwiak/openstack/tree/master/director-examples/nested-virt-on-nova
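The gist of the linked example: load the KVM kernel module with nesting enabled and pass the host CPU through to guests. A sketch of the two fragments involved on an Intel compute node (AMD hosts use kvm_amd instead; the TripleO-friendly way to apply these is covered in the link above):

```
# /etc/modprobe.d/kvm_intel.conf  (reboot or reload kvm_intel afterwards)
options kvm_intel nested=1

# /etc/nova/nova.conf on the compute node, [libvirt] section
cpu_mode = host-passthrough
```

You can verify the module side with `cat /sys/module/kvm_intel/parameters/nested`, which should print Y.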
Setting up jumbo frames for the overcloud is described here:
https://access.redhat.com/solutions/2521041
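The article walks through the TripleO template changes; the end state is that neutron and the tenant-network interfaces know about the larger MTU. As a rough sketch of the resulting service-side setting (9000 is the usual jumbo-frame value, not a number taken from the article):

```
# /etc/neutron/neutron.conf on the overcloud controllers
[DEFAULT]
global_physnet_mtu = 9000

# ...and in the NIC config templates, the tenant-network
# interface/bridge carries:  mtu: 9000
```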
Build a local repository that includes the following repos for every OSP version that you are planning to deploy:
```
[rhel-7-server-extras-rpms]
[rhel-7-server-openstack-11-devtools-rpms]
[rhel-7-server-openstack-11-optools-rpms]
[rhel-7-server-openstack-11-rpms]
[rhel-7-server-openstack-11-tools-rpms]
[rhel-7-server-optional-rpms]
[rhel-7-server-rh-common-rpms]
[rhel-7-server-rhceph-2-mon-rpms]
[rhel-7-server-rhceph-2-osd-rpms]
[rhel-7-server-rhceph-2-tools-rpms]
[rhel-7-server-rpms]
```
You might want to include OSP10 as well.
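One way to script the mirroring (my own sketch - reposync pulls each channel and createrepo builds its metadata; the included make_local_repos.sh is the authoritative version, and /var/www/html/repos is an assumed docroot matching 'repo_server'):

```shell
# Generate the reposync/createrepo commands that would mirror each
# channel into a web-served directory; pipe the output to `sh` to run it.
DEST="${DEST:-/var/www/html/repos}"   # assumed docroot; adjust to match repo_server

gen_repo_cmds() {
    for repo in \
        rhel-7-server-rpms \
        rhel-7-server-extras-rpms \
        rhel-7-server-openstack-11-rpms \
        rhel-7-server-rhceph-2-tools-rpms
    do
        # reposync mirrors into $DEST/<repoid>; createrepo builds the metadata
        echo "reposync --gpgcheck --repoid=${repo} --download_path=${DEST}"
        echo "createrepo ${DEST}/${repo}"
    done
}

gen_repo_cmds
```

Extend the list in the loop with the remaining channels from the section above.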
The two provider networks serve 2 roles:
- providing an external IP to the nested undercloud - and in the future to any supporting roles that require an external IP, like CloudForms, Ansible Tower, Satellite, etc.
- providing the External IP and Floating IPs to the nested overcloud controllers
Right now, at least the first external network should also be configured with neutron's 'External' flag enabled.
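For reference, creating such an external provider network from the CLI could look like this (the physnet name, VLAN ID, and addressing are placeholders - match them to your environment and to files/networks.csv):

```
openstack network create provider1 --external \
    --provider-network-type vlan \
    --provider-physical-network datacentre \
    --provider-segment 106

openstack subnet create provider1-subnet --network provider1 \
    --subnet-range 172.31.6.0/24 --gateway 172.31.6.254 --no-dhcp
```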
Let's ensure the variables in the config files match our environment:

```
ssh-keygen -t rsa
ssh-copy-id localhost
yum -y install ansible git
git clone https://github.com/OOsemka/hextupleo.git
cd hextupleo
vi vars/install-vars.yml
```
```yaml
# Overcloud admin user. Typically found inside the overcloudrc file
cloud_admin: admin
# Overcloud admin user password. Typically found inside the overcloudrc file
admin_password: Passw0rd
# Overcloud admin tenant. Typically found inside the overcloudrc file
admin_project: admin
# Overcloud public keystone endpoint. Typically found inside the overcloudrc file
os_auth: https://openstack.home.lab:13000/v2.0
# NTP server that will be reachable in the nested overcloud
ntp_server: 172.31.8.1
# DNS servers that will be reachable from the nested overcloud
dns_server1: 172.31.8.1
dns_server2: 172.31.8.2
# First of the 2 external networks. This one is used mainly for the undercloud and supporting roles
external_net: provider2
# Pre-built local rpm repository for OSP and Ceph
repo_server: http://172.31.8.1/repos/
# Specify how you want to consume hextupleO - options are (tower, core, http) - core is the default
deployment_type: core
# If your environment requires a proxy for outside connectivity, use the 3 variables below;
# otherwise, comment out what's not needed but -leave- 'proxy_env' and 'no_proxy' defined.
proxy_env:
  http_proxy: http://User101:MyPassw0rd@proxy.internet.company.com:8080
  https_proxy: http://User101:MyPassw0rd@proxy.internet.company.com:8080
  no_proxy: localhost.localdomain
shade_env:
  PYTHONWARNINGS: "ignore:Certificate has no, ignore:A true SSLContext object is not available, ignore:Certificate for"
# Flavors used for nested OpenStack roles
flavors:
  undercloud:
    name: undercloud
    ram: 16384
    disk: 100
    vcpu: 4
  controller:
    name: overcloud-controller
    ram: 12288
    disk: 60
    vcpu: 2
  compute:
    name: overcloud-compute
    ram: 12288
    disk: 60
    vcpu: 4
  ceph:
    name: overcloud-ceph
    ram: 4096
    disk: 50
    vcpu: 2
  hci:
    name: overcloud-hci
    ram: 16384
    disk: 60
    vcpu: 4
  rhel-pre:
    name: rhel-small
    ram: 4096
    disk: 10
    vcpu: 2
```
Finally, let's configure the second provider network, which will be consumed by the nested OpenStack controllers for External APIs and Floating IPs:
```
# cat files/networks.csv
#id,network,cidr,gateway,firstip,lastip,cf1,reservedby
id1,provider1,172.31.6.0/24,172.31.6.254,172.31.6.100,172.31.6.109
id2,provider1,172.31.6.0/24,172.31.6.254,172.31.6.110,172.31.6.119
id3,provider1,172.31.6.0/24,172.31.6.254,172.31.6.120,172.31.6.129
id4,provider1,172.31.6.0/24,172.31.6.254,172.31.6.130,172.31.6.139
id5,provider1,172.31.6.0/24,172.31.6.254,172.31.6.140,172.31.6.149
id6,provider1,172.31.6.0/24,172.31.6.254,172.31.6.150,172.31.6.159
```
The first column is the primary key of this local file database. It has to start with "id" and end with a number.
The second column is a pre-defined external/provider network that has already been created in OpenStack (the master overcloud).
Next, we split the large /24 network into smaller chunks that will be used to separate the nested OpenStacks.
Please ensure networks.csv stays in the files directory and that the fields are delimited by commas (,).
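A quick sanity check can be scripted before running the playbooks; this helper (my own sketch, not part of the project) flags rows whose key is not id followed by a number or that lack the six mandatory fields:

```shell
# Validate a networks.csv file: comma-delimited, keyed id<number>,
# at least six fields (id,network,cidr,gateway,firstip,lastip).
# Prints the offending line numbers and returns non-zero on failure.
validate_networks_csv() {
    awk -F, '
        /^#/               { next }                                   # skip the header comment
        NF < 6             { print "bad field count on line " NR; bad = 1 }
        $1 !~ /^id[0-9]+$/ { print "bad key on line " NR;         bad = 1 }
        END                { exit bad }
    ' "$1"
}
```

Run it as `validate_networks_csv files/networks.csv && echo OK`.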
NOTE: If you decided to deploy under Tower, please ensure vars/tower_cli.cfg has your Tower credentials.
Now we should be able to execute the installation playbook:

```
ansible-playbook install.yml
```

Depending on how you set the deployment_type parameter, you can start deploying nested OpenStacks in one of 3 ways:
- via Ansible core (playbook from the CLI)
- via the custom http webform (http://localhost/hextupleO)
- via Ansible Tower
For core:
Edit the deploy-vars.yml file with the number of each node type you want to deploy in the nested OpenStack and execute:
```
ansible-playbook htplO-build-all.yml -e @deploy-vars.yml
```
For http:
Point your web browser at the RHEL server's external IP address, directory /hextupleO, and decide how to build your nested OpenStack (http://external-ip/hextupleO).
For Tower:
Log on to the Tower web UI; the new Templates for creating and destroying nested OpenStacks should already be there.
Enjoy!