Failed to launch cirros instance
Aledangelo opened this issue · 6 comments
I have a CentOS VM (downloaded from the link in the INSTALL.md
file) with 16 GB of RAM, and OpenStack was installed successfully. Here are some more details:
- Hypervisor
[osboxes@osboxes ~(keystone_admin)]$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+-----------+-------+
| 1 | osboxes | QEMU | 10.0.2.15 | up |
+----+---------------------+-----------------+-----------+-------+
- Flavor
[osboxes@osboxes ~(keystone_admin)]$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 42 | m1.nano | 128 | 0 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 84 | m1.micro | 128 | 0 | 0 | 1 | True |
+----+-----------+-------+------+-----------+-------+-----------+
- Compute service
[osboxes@osboxes ~(keystone_admin)]$ openstack compute service list
+----+------------------+---------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+---------+----------+---------+-------+----------------------------+
| 6 | nova-conductor | osboxes | internal | enabled | up | 2023-03-10T16:25:21.000000 |
| 7 | nova-scheduler | osboxes | internal | enabled | up | 2023-03-10T16:25:23.000000 |
| 8 | nova-consoleauth | osboxes | internal | enabled | up | 2023-03-10T16:25:22.000000 |
| 9 | nova-compute | osboxes | nova | enabled | up | 2023-03-10T16:25:19.000000 |
+----+------------------+---------+----------+---------+-------+----------------------------+
- Network
[osboxes@osboxes ~(keystone_admin)]$ openstack network list
+--------------------------------------+---------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+--------------------------------------+
| 3ec65053-0c73-4b97-92f4-72517f433047 | public | 59f4f2fe-599d-45c2-9fcc-26dc1ddf1bde |
| f56e6f48-b08f-44d1-8788-0eb1c38e5134 | private | 4ee45618-410c-4678-85c4-8d6ff8093513 |
+--------------------------------------+---------+--------------------------------------+
- I created a keypair using the `openstack` CLI
[osboxes@osboxes ~(keystone_admin)]$ openstack keypair list
+------------+-------------------------------------------------+
| Name | Fingerprint |
+------------+-------------------------------------------------+
| cirros-key | 4f:84:43:49:9f:d1:ae:b2:01:5c:e0:ce:83:36:78:ff |
+------------+-------------------------------------------------+
- Security group
[osboxes@osboxes ~(keystone_admin)]$ openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+---------+------------------------+----------------------------------+
| 56415af6-4031-44f9-b014-494f03f25544 | default | Default security group | c3059803be424de6b81b8052ef659e62 |
| b135e158-7efd-4365-9a37-bd1e6fc28d3c | default | Default security group | fc4403f0564e4b49963e332fdd98891f |
| d932968d-b4a8-49a5-a3b7-1419e46937ee | default | Default security group | |
+--------------------------------------+---------+------------------------+----------------------------------+
- Images
[osboxes@osboxes ~(keystone_admin)]$ openstack image list
+--------------------------------------+--------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------+--------+
| 7f04050c-d83a-4f17-aaf1-8049c6317057 | cirros | active |
| 0e444cf3-ce43-47a0-89d0-98a04ef67742 | cirros-uec | active |
| 146d4a6b-ad1e-4d9f-8b08-98eae3c3dab4 | cirros-uec-kernel | active |
| 0b50e2e5-1440-4654-b568-4e120ddf28c1 | cirros-uec-ramdisk | active |
| 63bd4710-5ef8-4e37-af5c-b0b9ad51e485 | cirros-uec_alt | active |
+--------------------------------------+--------------------+--------+
- Nova services are correctly running
[osboxes@osboxes ~(keystone_admin)]$ sudo systemctl status openstack-nova*
...
I ran this command to create a CirrOS instance:
[osboxes@osboxes ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.micro --key-name cirros-key --security-group 56415af6-4031-44f9-b014-494f03f25544 --nic net-id=public --availability-zone nova cirros-test
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | otzsbiW59838 |
| config_drive | |
| created | 2023-03-10T16:22:18Z |
| flavor | m1.micro (84) |
| hostId | |
| id | d19cbd60-8e2f-4713-bee7-bf48f133d90b |
| image | cirros (7f04050c-d83a-4f17-aaf1-8049c6317057) |
| key_name | cirros-key |
| name | cirros-test |
| progress | 0 |
| project_id | fc4403f0564e4b49963e332fdd98891f |
| properties | |
| security_groups | name='56415af6-4031-44f9-b014-494f03f25544' |
| status | BUILD |
| updated | 2023-03-10T16:22:18Z |
| user_id | 2628d68cd9a94c639bb330f6cd65696b |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
[osboxes@osboxes ~(keystone_admin)]$ openstack server list
+--------------------------------------+-------------+--------+----------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+----------+--------+----------+
| d19cbd60-8e2f-4713-bee7-bf48f133d90b | cirros-test | BUILD | | cirros | m1.micro |
+--------------------------------------+-------------+--------+----------+--------+----------+
[osboxes@osboxes ~(keystone_admin)]$ openstack server list
+--------------------------------------+-------------+--------+----------+--------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+----------+--------+----------+
| d19cbd60-8e2f-4713-bee7-bf48f133d90b | cirros-test | ERROR | | cirros | m1.micro |
+--------------------------------------+-------------+--------+----------+--------+----------+
Inspecting the instance after it failed, I obtained this:
[osboxes@osboxes ~(keystone_admin)]$ openstack server show cirros-test
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | instance-00000009 |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | error |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| config_drive | |
| created | 2023-03-10T16:22:18Z |
| fault | {u'message': u'No valid host was found. There are not enough hosts available.', u'code': 500, u'details': u' File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 555, in build_instances\n context, spec_obj, instance_uuids)\n File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 636, in _schedule_instances\n request_spec, instance_uuids)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 638, in wrapped\n return func(*args, **kwargs)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 52, in select_destinations\n instance_uuids)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method\n return getattr(self.instance, __name)(*args, **kwargs)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 33, in select_destinations\n instance_uuids)\n File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 137, in select_destinations\n return cctxt.call(ctxt, \'select_destinations\', **msg_args)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 169, in call\n retry=self.retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 123, in _send\n timeout=timeout, retry=retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 566, in send\n retry=retry)\n File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 557, in _send\n raise result\n', u'created': u'2023-03-10T16:22:30Z'} |
| flavor | m1.micro (84) |
| hostId | |
| id | d19cbd60-8e2f-4713-bee7-bf48f133d90b |
| image | cirros (7f04050c-d83a-4f17-aaf1-8049c6317057) |
| key_name | cirros-key |
| name | cirros-test |
| project_id | fc4403f0564e4b49963e332fdd98891f |
| properties | |
| status | ERROR |
| updated | 2023-03-10T16:22:30Z |
| user_id | 2628d68cd9a94c639bb330f6cd65696b |
| volumes_attached | |
+-------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
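For context, the `No valid host was found` fault means the scheduler filtered out every compute node. On paper the request should fit; a rough back-of-the-envelope check with the values reported above (the host vCPU count is an assumption, since it isn't shown, and the ratios are nova's Pike defaults):

```shell
# Hypothetical sanity check: the scheduler rejects a host when the flavor's
# requirements exceed the host's allocation-ratio-adjusted capacity.
flavor_ram=128        # MB, m1.micro (from `openstack flavor list`)
flavor_vcpus=1        # m1.micro
host_ram=16384        # MB, the 16 GB CentOS VM
host_vcpus=2          # assumption: not shown in the output above
ram_ratio=1.5         # nova's default ram_allocation_ratio
cpu_ratio=16          # nova's default cpu_allocation_ratio

# Capacity as seen by the RamFilter / CoreFilter
ram_cap=$(awk "BEGIN { print int($host_ram * $ram_ratio) }")
cpu_cap=$(awk "BEGIN { print int($host_vcpus * $cpu_ratio) }")

if [ "$flavor_ram" -le "$ram_cap" ] && [ "$flavor_vcpus" -le "$cpu_cap" ]; then
  echo "m1.micro fits: ram_cap=${ram_cap}MB cpu_cap=${cpu_cap} vCPUs"
else
  echo "m1.micro does not fit"
fi
```

Since m1.micro fits comfortably, the failure is unlikely to be a raw capacity problem, which points at the image or host configuration instead.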
After that, I tried to solve this issue by:
- Installing the libvirt clients
- Changing `virt_type` in `/etc/nova/nova.conf` from `qemu` to `kvm`
- Trying to increase the `allocation_ratio`
- Giving all OpenStack component profiles (Nova, Keystone, ...) permission to access the libvirt socket with this command: `sudo setfacl -m user:<user (i.e. nova)>:rw /var/run/libvirt/libvirt-sock`
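For reference, the knobs mentioned above live in `/etc/nova/nova.conf`; a sketch of the relevant settings (values are illustrative defaults, not recommendations):

```ini
[DEFAULT]
# Overcommit ratios consulted by the scheduler's RAM and CPU filters
ram_allocation_ratio = 1.5
cpu_allocation_ratio = 16.0

[libvirt]
# qemu (plain emulation) works inside a VM without nested virtualization;
# kvm requires hardware virtualization to be exposed to the guest
virt_type = qemu
```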
Inspecting the log files (in DEBUG mode), I found the same error message printed by `openstack server show cirros-test`, but there was no other information.
Do you run into any issues when executing the workload in fault-free conditions?
`[user@domain ARTIFACT_PATH]$ sudo ./src/run_ff.sh`
The script executes a set of operations (including the server creation) by calling the start_workload.sh script.
We tried this script just now and it didn't run into any issue; we also verified that the instance was active with `openstack server list` (after sourcing the user rc file). However, we noticed that we couldn't see any instance with `openstack server list` when authenticated as admin. Could the problem be that we were performing actions as admin?
The pre-built `cirros` image used by OpenStack Pike is known not to work well. Please refer to the raw image in our repo (cirros), which can be used to create a working image.
You can use the following to create a new OpenStack image (as we do in the workload scripts):
`openstack image create --public --disk-format qcow2 --container-format bare --file cirros-0.4.0-x86_64-disk.img cirros_work`
Moreover, as you can see in the start_workload.sh file, we have two different files for authentication: `admin_keystonrc_file_name` and `keystonrc_file_name`.
We switch from user to admin (and vice versa) depending on the operation we aim to perform. Take a look at the script.
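The switching pattern can be sketched like this (a hypothetical sketch, not the script's actual code; the rc file paths are assumptions, and the helper name `run_as` is invented for illustration):

```shell
# Run a command under the credentials of a given keystone rc file.
# A subshell is used so the sourced credentials don't leak into the caller.
run_as() {
  local rc_file=$1
  shift
  ( . "$rc_file" && "$@" )
}

# Example usage (requires the rc files and the openstack client):
#   run_as keystonerc_admin openstack image list    # admin-scoped operation
#   run_as keystonerc_user  openstack server list   # tenant-scoped operation
```

This also matches the behavior you observed: servers are scoped to the project of the credentials in use, so instances created as the tenant user do not appear in a plain `openstack server list` run as admin.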
Overall, this is not an issue in our repo. For any other issues you run into, please contact us via email.
Yes, in fact we too realized that this is not an issue in your repo. Thank you so much for the support.
Thanks for the support.