This repository contains a set of Ansible playbooks that will spawn and configure VMs (currently supporting libvirt only) to deploy Kolla.
- Ansible
- Libvirt
- virt-manager
- Kolla image - TODO: Upload the Kolla image somewhere
- 3 NAT networks - Guide (see the example definition after this list)
  - nat1: `192.168.122.0/24` for SSH and Ansible
  - nat2: `192.168.123.0/24` for the OpenStack management network
  - nat3: `192.168.100.0/24` for Neutron networking
- Kolla Ansible source code
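If the three NAT networks do not exist on your host yet, they can be created with standard libvirt tooling. The sketch below defines nat2 as an example, assuming `virsh` is available; the bridge name and DHCP range are illustrative assumptions, so follow the linked Guide for the exact settings this repository expects.

```
# Sketch only: create the nat2 NAT network (192.168.123.0/24) with virsh.
# The bridge name and DHCP range are assumptions - adjust to your host.
cat > nat2.xml <<'EOF'
<network>
  <name>nat2</name>
  <forward mode='nat'/>
  <bridge name='virbr123' stp='on' delay='0'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define nat2.xml   # register the network persistently
virsh net-start nat2        # start it now
virsh net-autostart nat2    # start it automatically on host boot
```

Repeat the same pattern for nat1 (`192.168.122.0/24`) and nat3 (`192.168.100.0/24`).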
- Copy SSH key `ssh/id_kolla` to your `.ssh` directory
  ```
  cp ssh/id_kolla ~/.ssh
  ```
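  SSH is picky about key permissions, so after copying you may also want to tighten them (a small optional step, not part of the playbooks):
  ```
  # Restrict the copied private key so the ssh client will accept it
  chmod 600 ~/.ssh/id_kolla
  ```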
- Copy and edit the example config for aio or multinode.
- The options are documented in comments.
- Save your config as `user_config.yml`.
- Edit `globals.yml`
- This is the configuration file of Kolla - Read the Docs
- One of the most important options is `kolla_internal_vip_address`, which should be an unused address in the nat2 range, e.g.,
  ```
  kolla_internal_vip_address: 192.168.123.200 # The default
  ```
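  To confirm your chosen VIP is actually unused, you can check which addresses libvirt has already leased on nat2; a quick sanity check, assuming the network is named nat2 on your host:
  ```
  # List IPs currently leased on the nat2 network; pick a VIP that is not listed
  virsh net-dhcp-leases nat2
  ```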
- Spawn the node
  ```
  ansible-playbook -i local spawn-aio.yml -e @user_config.yml
  ```
- Prepare the node
  ```
  ansible-playbook -i all-in-one prepare.yml -e @user_config.yml
  ```
- SSH into the node
  ```
  ssh <node-ip> -o "IdentitiesOnly=yes" -i ssh/id_kolla
  ```
- Deploy using the deploy script
  ```
  ./deploy
  ```
- If it fails, please refer to Kolla documentation and follow from the linked point.
- It may fail due to a network issue; in that case, just run it again.
- Spawn your VMs by running this playbook
  ```
  ansible-playbook -i local spawn-multinode.yml -e @user_config.yml
  ```
- Check the IPs on the console output and in the generated `multinode` inventory file. If you see `<NODE_NAME>_IP_NOT_FOUND`, find the IP manually (e.g., through virt-manager). It might be that the node didn't start for some reason, or that it didn't boot fast enough before the playbook gave up retrieving its IP.
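  If virt-manager is not handy, the same information can be pulled with `virsh`; a sketch, where `<node-name>` stands for whichever domain the playbook created:
  ```
  virsh list --all              # confirm the node actually started
  virsh net-dhcp-leases nat1    # IPs handed out on the SSH/Ansible network
  virsh domifaddr <node-name>   # addresses of one specific running node
  ```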
- If you need to log into a node without SSH, use these credentials
  ```
  username: kolla
  password: hhh
  ```
- Check the nat2 IPs of the nodes and set `kolla_internal_vip_address` in globals.yml to an unused IP in the `192.168.123.0/24` range.
- Prepare your nodes using this playbook
  ```
  ansible-playbook -i multinode prepare.yml -e @user_config.yml
  ```
- SSH into your deployment node
  ```
  ssh <deployment-node-ip> -o "IdentitiesOnly=yes" -i ssh/id_kolla
  ```
- You can try the deploy script, which is already on the node
  ```
  ./deploy
  ```
- If that fails, investigate the issue. Sometimes you just have to run the script again. If it doesn't help, see the next section.
Based on Quick start for development
- A virtual environment for Kolla is sourced automatically when you SSH into the node.
- A copy of Kolla Ansible is already present.

These steps can also be done automatically by the `./pre-deploy` script.
- Install Kolla dependencies
  ```
  kolla-ansible install-deps
  ```
- Known issue: sometimes `ansible-galaxy` commands fail due to network issues. Just run the command again.
- Generate certificates in case you use TLS
  ```
  kolla-ansible -i multinode certificates
  ```
- Bootstrap servers
  ```
  kolla-ansible -i multinode bootstrap-servers
  ```
- Run prechecks
  ```
  kolla-ansible -i multinode prechecks
  ```
- Run the deployment
  ```
  kolla-ansible -i multinode deploy
  ```
- If the command fails, you may have issues with Kolla (especially if you changed the playbooks) or your nodes might not have enough resources.
- You might also check your globals file: `network_interface` and `neutron_external_interface` must be existing interfaces on all of your nodes. Check using the `ip a` command (see the sketch below). However, the node XML is configured in a way that the interface names should always be `network_interface: enp6s0` and `neutron_external_interface: enp7s0`.
- You might be unlucky and one of the nodes gets the same IP as `kolla_internal_vip_address`. In that case, change `kolla_internal_vip_address`.
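A quick way to run that interface check on every node at once is an Ansible ad-hoc command against the generated inventory; a sketch, assuming the `multinode` inventory produced by this repository is usable as-is:

```
# Print all interfaces (brief format) on every node so you can verify
# that enp6s0 and enp7s0 exist everywhere.
ansible -i multinode all -m command -a "ip -br addr"
```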
- Install the OpenStack client
  ```
  pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/master
  ```
- Run post-deploy jobs
  ```
  kolla-ansible post-deploy
  ```
- Create example resources
  ```
  ./init-runonce
  ```
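  To verify the deployment from the deployment node, you can source the admin credentials and run a couple of read-only queries; a sketch, assuming post-deploy wrote `admin-openrc.sh` to the usual `/etc/kolla/` location:
  ```
  # Load the admin credentials generated by kolla-ansible post-deploy
  . /etc/kolla/admin-openrc.sh
  # Read-only checks against the freshly deployed cloud
  openstack service list
  openstack network list
  ```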
- Delete all nodes using
  ```
  ansible-playbook -i local delete.yml -e @vm_list.yml -e @user_config.yml
  ```