contiv-experimental/cluster

Node commissioning failing with OVS service start timeout error

rkharya opened this issue · 10 comments

Docker - 1.11.0
UCP - 1.1.0
Ansible/Contiv-cluster - latest, cloned on 05/16

First node commissioning/bootstrap is failing with the error below -

"TASK [contiv_network : start ovs service] *************************************",
"fatal: [Docker-2-FLM19379EU8]: FAILED! => {"changed": false, "failed": true, "msg": "Job for openvswitch.service failed because a timeout was exceeded. See \"systemctl status openvswitch.service\" and \"journalctl -xe\" for details.\n"}",
"\tto retry, use: --limit @/home/cluster-admin/ansible/site.retry",
"",
"PLAY RECAP ********************************************************************",
"Docker-2-FLM19379EU8 : ok=63 changed=41 unreachable=0 failed=1 ",
"",

Clusterm.service status failure message -

May 16 15:33:27 Docker-1.cisco.com clusterm[31369]: level=info msg="TASK [contiv_network : start ovs service] *************************************"
May 16 15:33:42 Docker-1.cisco.com clusterm[31369]: level=debug msg="ansible-playbook -i /tmp/hosts011576945 --user cluster-admin --private-key /home/cluster-admin/.ssh/id_rsa --extra-vars {"contiv_network_mode":"standalone","control_interface":"enp6s0","docker_device":"/dev/sdb","docker_device_size":"100000MB","docker_version":"1.11.0","env":{"HTTPS_PROXY":"http://64.102.255.40:8080","HTTP_PROXY":"http://64.102.255.40:8080","NO_PROXY":"","http_proxy":"http://64.102.255.40:8080","https_proxy":"http://64.102.255.40:8080","no_proxy":""},"fwd_mode":"bridge","netplugin_if":"enp7s0","scheduler_provider":"ucp-swarm","service_vip":"10.65.122.79","ucp_bootstrap_node_name":"Docker-2-FLM19379EU8","ucp_license_file":"/home/cluster-admin/docker_subscription.lic","ucp_version":"1.1.0","validate_certs":"false"} /home/cluster-admin/ansible//site.yml (pid: 4458) has been running for 25m0.013143761s"
May 16 15:34:42 Docker-1.cisco.com clusterm[31369]: level=debug msg="ansible-playbook -i /tmp/hosts011576945 --user cluster-admin --private-key /home/cluster-admin/.ssh/id_rsa --extra-vars {"contiv_network_mode":"standalone","control_interface":"enp6s0","docker_device":"/dev/sdb","docker_device_size":"100000MB","docker_version":"1.11.0","env":{"HTTPS_PROXY":"http://64.102.255.40:8080","HTTP_PROXY":"http://64.102.255.40:8080","NO_PROXY":"","http_proxy":"http://64.102.255.40:8080","https_proxy":"http://64.102.255.40:8080","no_proxy":""},"fwd_mode":"bridge","netplugin_if":"enp7s0","scheduler_provider":"ucp-swarm","service_vip":"10.65.122.79","ucp_bootstrap_node_name":"Docker-2-FLM19379EU8","ucp_license_file":"/home/cluster-admin/docker_subscription.lic","ucp_version":"1.1.0","validate_certs":"false"} /home/cluster-admin/ansible//site.yml (pid: 4458) has been running for 26m0.013379505s"
May 16 15:35:42 Docker-1.cisco.com clusterm[31369]: level=debug msg="ansible-playbook -i /tmp/hosts011576945 --user cluster-admin --private-key /home/cluster-admin/.ssh/id_rsa --extra-vars {"contiv_network_mode":"standalone","control_interface":"enp6s0","docker_device":"/dev/sdb","docker_device_size":"100000MB","docker_version":"1.11.0","env":{"HTTPS_PROXY":"http://64.102.255.40:8080","HTTP_PROXY":"http://64.102.255.40:8080","NO_PROXY":"","http_proxy":"http://64.102.255.40:8080","https_proxy":"http://64.102.255.40:8080","no_proxy":""},"fwd_mode":"bridge","netplugin_if":"enp7s0","scheduler_provider":"ucp-swarm","service_vip":"10.65.122.79","ucp_bootstrap_node_name":"Docker-2-FLM19379EU8","ucp_license_file":"/home/cluster-admin/docker_subscription.lic","ucp_version":"1.1.0","validate_certs":"false"} /home/cluster-admin/ansible//site.yml (pid: 4458) has been running for 27m0.013784035s"
May 16 15:36:42 Docker-1.cisco.com clusterm[31369]: level=debug msg="ansible-playbook -i /tmp/hosts011576945 --user cluster-admin --private-key /home/cluster-admin/.ssh/id_rsa --extra-vars {"contiv_network_mode":"standalone","control_interface":"enp6s0","docker_device":"/dev/sdb","docker_device_size":"100000MB","docker_version":"1.11.0","env":{"HTTPS_PROXY":"http://64.102.255.40:8080","HTTP_PROXY":"http://64.102.255.40:8080","NO_PROXY":"","http_proxy":"http://64.102.255.40:8080","https_proxy":"http://64.102.255.40:8080","no_proxy":""},"fwd_mode":"bridge","netplugin_if":"enp7s0","scheduler_provider":"ucp-swarm","service_vip":"10.65.122.79","ucp_bootstrap_node_name":"Docker-2-FLM19379EU8","ucp_license_file":"/home/cluster-admin/docker_subscription.lic","ucp_version":"1.1.0","validate_certs":"false"} /home/cluster-admin/ansible//site.yml (pid: 4458) has been running for 28m0.014131995s"
May 16 15:37:42 Docker-1.cisco.com clusterm[31369]: level=debug msg="ansible-playbook -i /tmp/hosts011576945 --user cluster-admin --private-key /home/cluster-admin/.ssh/id_rsa --extra-vars {"contiv_network_mode":"standalone","control_interface":"enp6s0","docker_device":"/dev/sdb","docker_device_size":"100000MB","docker_version":"1.11.0","env":{"HTTPS_PROXY":"http://64.102.255.40:8080","HTTP_PROXY":"http://64.102.255.40:8080","NO_PROXY":"","http_proxy":"http://64.102.255.40:8080","https_proxy":"http://64.102.255.40:8080","no_proxy":""},"fwd_mode":"bridge","netplugin_if":"enp7s0","scheduler_provider":"ucp-swarm","service_vip":"10.65.122.79","ucp_bootstrap_node_name":"Docker-2-FLM19379EU8","ucp_license_file":"/home/cluster-admin/docker_subscription.lic","ucp_version":"1.1.0","validate_certs":"false"} /home/cluster-admin/ansible//site.yml (pid: 4458) has been running for 29m0.014469116s"
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg="fatal: [Docker-2-FLM19379EU8]: FAILED! => {"changed": false, "failed": true, "msg": "Job for openvswitch.service failed because a timeout was exceeded. See \"systemctl status openvswitch.service\" and \"journalctl -xe\" for details.\n"}"
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg="\tto retry, use: --limit @/home/cluster-admin/ansible/site.retry"
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg=
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg="PLAY RECAP ***
*****************************************************************"
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg="Docker-2-FLM19379EU8 : ok=63 changed=41 unreachable=0 failed=1 "
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=info msg=
May 16 15:38:27 Docker-1.cisco.com clusterm[31369]: level=error msg="configuration failed, starting cleanup. Error: exit status 2"

Docker-2 shows this error in the openvswitch.service journal -

[root@Docker-2 log]# journalctl -xu openvswitch.service
-- Logs begin at Sun 2016-05-15 16:47:30 IST, end at Mon 2016-05-16 15:50:01 IST. --
May 16 15:33:27 Docker-2.cisco.com systemd[1]: Starting LSB: Open vSwitch switch...
-- Subject: Unit openvswitch.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit openvswitch.service has begun starting up.
May 16 15:33:27 Docker-2.cisco.com openvswitch[5604]: /etc/openvswitch/conf.db does not exist ... (warning).
May 16 15:33:27 Docker-2.cisco.com openvswitch[5604]: install: cannot change owner and permissions of '/etc/openvswitch': No such file or directory
May 16 15:33:27 Docker-2.cisco.com openvswitch[5604]: Creating empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error: /etc/openvswitch/conf.db: failed to lock lockfile (No such file or directory)
May 16 15:33:27 Docker-2.cisco.com openvswitch[5604]: [FAILED]

May 16 15:33:27 Docker-2.cisco.com openvswitch[5604]: Inserting openvswitch module [ OK ]
May 16 15:38:27 Docker-2.cisco.com systemd[1]: openvswitch.service start operation timed out. Terminating.
May 16 15:38:27 Docker-2.cisco.com systemd[1]: Failed to start LSB: Open vSwitch switch.
-- Subject: Unit openvswitch.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- Unit openvswitch.service has failed.
-- The result is failed.
May 16 15:38:27 Docker-2.cisco.com systemd[1]: Unit openvswitch.service entered failed state.
May 16 15:38:27 Docker-2.cisco.com systemd[1]: openvswitch.service failed.

/var/log/messages has this -

[root@Docker-2 log]# cat messages | grep openvswitch
May 16 15:32:54 Docker-2 python: ansible-get_url Invoked with directory_mode=None force=False backup=None remote_src=None owner=None follow=False group=None use_proxy=True serole=None content=NOT_LOGGING_PARAMETER setype=None timeout=10 src=None dest=/tmp/openvswitch-2.3.1-1.x86_64.rpm selevel=None force_basic_auth=False sha256sum= http_agent=ansible-httpget regexp=None url_password=None url=https://cisco.box.com/shared/static/51eo9dcw04qx2y1f14n99y4yt5kug3q4.rpm checksum= seuser=None headers=None delimiter=None mode=None url_username=None validate_certs=False
May 16 15:33:19 Docker-2 python: ansible-yum Invoked with name=['/tmp/openvswitch-2.3.1-1.x86_64.rpm'] list=None disable_gpg_check=False conf_file=None install_repoquery=True state=present disablerepo=None update_cache=False enablerepo=None exclude=None
May 16 15:33:25 Docker-2 yum[5470]: Installed: openvswitch-2.3.1-1.x86_64
May 16 15:33:27 Docker-2 python: ansible-service Invoked with name=openvswitch pattern=None enabled=True state=started sleep=None arguments= runlevel=default
May 16 15:33:27 Docker-2 openvswitch: /etc/openvswitch/conf.db does not exist ... (warning).
May 16 15:33:27 Docker-2 openvswitch: install: cannot change owner and permissions of '/etc/openvswitch': No such file or directory
May 16 15:33:27 Docker-2 ovsdb-tool: ovs|00001|lockfile|WARN|/etc/openvswitch/.conf.db.lock: failed to open lock file: No such file or directory
May 16 15:33:27 Docker-2 ovsdb-tool: ovs|00002|lockfile|WARN|/etc/openvswitch/.conf.db.lock: failed to lock file: No such file or directory
May 16 15:33:27 Docker-2 openvswitch: Creating empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error: /etc/openvswitch/conf.db: failed to lock lockfile (No such file or directory)
May 16 15:33:27 Docker-2 openvswitch: [FAILED]
May 16 15:33:27 Docker-2 kernel: openvswitch: Open vSwitch switching datapath
May 16 15:33:27 Docker-2 openvswitch: Inserting openvswitch module [ OK ]
May 16 15:33:27 Docker-2 python: SELinux is preventing /usr/bin/install from write access on the directory /etc.
***** Plugin catchall_labels (83.8 confidence) suggests ***************
If you want to allow install to have write access on the etc directory
Then you need to change the label on /etc
Do
# semanage fcontext -a -t FILE_TYPE '/etc'
where FILE_TYPE is one of the following: hugetlbfs_t, openvswitch_log_t, openvswitch_rw_t, openvswitch_tmp_t, openvswitch_var_lib_t, openvswitch_var_run_t, tmp_t, var_lib_t, var_log_t, var_run_t.
Then execute:
restorecon -v '/etc'
***** Plugin catchall (17.1 confidence) suggests **************************
If you believe that install should be allowed write access on the etc directory by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep install /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp
May 16 15:38:27 Docker-2 systemd: openvswitch.service start operation timed out. Terminating.
May 16 15:38:27 Docker-2 systemd: Unit openvswitch.service entered failed state.
May 16 15:38:27 Docker-2 systemd: openvswitch.service failed.
May 16 15:38:30 Docker-2 ovs-vsctl: ovs|00002|vsctl|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 16 15:38:30 Docker-2 ovs-vsctl: ovs|00002|vsctl|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
May 16 15:38:31 Docker-2 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=semanage permissive -d openvswitch_t removes=None creates=None chdir=None

Hope these logs are helpful in debugging the issue.

On the 2nd attempt I see a different error. This time openvswitch.service reports active (running), but the ovsdb-server startup steps still failed -

[root@Docker-2 openvswitch]# systemctl status -l -n 100 openvswitch.service
● openvswitch.service - LSB: Open vSwitch switch
Loaded: loaded (/etc/rc.d/init.d/openvswitch)
Active: active (running) since Mon 2016-05-16 18:21:09 IST; 3h 19min ago
Docs: man:systemd-sysv-generator(8)
Process: 18597 ExecStart=/etc/rc.d/init.d/openvswitch start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/openvswitch.service
├─5606 /bin/sh /usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
├─5607 tee -a /var/log/openvswitch/ovs-ctl.log
├─5641 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
├─5643 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
└─5644 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor

May 16 18:21:09 Docker-2.cisco.com systemd[1]: Starting LSB: Open vSwitch switch...
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: /etc/openvswitch/conf.db does not exist ... (warning).
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: install: cannot change owner and permissions of '/etc/openvswitch': No such file or directory
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: Creating empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error: /etc/openvswitch/conf.db: failed to lock lockfile (No such file or directory)
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: [FAILED]
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: ovs-vswitchd is already running.
May 16 18:21:09 Docker-2.cisco.com ovs-appctl[18623]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovsdb-server.pid: open: No such file or directory
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: Enabling remote OVSDB managers 2016-05-16T12:51:09Z|00001|daemon_unix|WARN|/var/run/openvswitch/ovsdb-server.pid: open: No such file or directory
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: ovs-appctl: cannot read pidfile "/var/run/openvswitch/ovsdb-server.pid" (No such file or directory)
May 16 18:21:09 Docker-2.cisco.com openvswitch[18597]: [FAILED]
May 16 18:21:09 Docker-2.cisco.com systemd[1]: Started LSB: Open vSwitch switch.

It's looking for ovsdb-server.pid, while we only have ovs-vswitchd.pid -

[root@Docker-2 openvswitch]# ls -ltr
total 4
-rw-r--r--. 1 root root 5 May 16 15:33 ovs-vswitchd.pid
srwx------. 1 root root 0 May 16 15:33 ovs-vswitchd.5644.ctl
[root@Docker-2 openvswitch]# cat ovs-vswitchd.pid
5644

The currently running OVS has the below processes on Docker-2 -

[root@Docker-2 openvswitch]# ps -eaf|grep -i ovs
root 5606 1 0 15:33 ? 00:00:00 /bin/sh /usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
root 5607 1 0 15:33 ? 00:00:00 tee -a /var/log/openvswitch/ovs-ctl.log
root 5641 5606 0 15:33 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
root 5643 5641 0 15:33 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor
root 5644 5643 0 15:33 ? 00:00:00 ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log --pidfile=/var/run/openvswitch/ovs-vswitchd.pid --detach --monitor

vvb commented

@rkharya selinux seems to be denying access to /etc. Will check more and update

[cluster-admin@Docker-2 ~]$ sudo ausearch -m avc | grep -i install
type=SYSCALL msg=audit(1463393007.295:4499): arch=c000003e syscall=83 success=no exit=-13 a0=7ffe26788f85 a1=1c0 a2=0 a3=7ffe26788140 items=0 ppid=5606 pid=5631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="install" exe="/usr/bin/install" subj=system_u:system_r:openvswitch_t:s0 key=(null)
type=AVC msg=audit(1463393007.295:4499): avc:  denied  { write } for  pid=5631 comm="install" name="etc" dev="dm-1" ino=134320257 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=dir
type=SYSCALL msg=audit(1463403069.623:7544): arch=c000003e syscall=83 success=no exit=-13 a0=7ffdddef3f85 a1=1c0 a2=0 a3=7ffdddef2c50 items=0 ppid=18599 pid=18617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="install" exe="/usr/bin/install" subj=system_u:system_r:openvswitch_t:s0 key=(null)
type=AVC msg=audit(1463403069.623:7544): avc:  denied  { write } for  pid=18617 comm="install" name="etc" dev="dm-1" ino=134320257 scontext=system_u:system_r:openvswitch_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=dir
[cluster-admin@Docker-2 ~]$
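
For anyone hitting the same thing, a quick way to confirm that SELinux is what is blocking the service is to retry the start in permissive mode. This is only a diagnostic sketch (run as root or via sudo on the failing node), not something the playbook does:

# check the current mode; "Enforcing" means AVC denials actually block the service
getenforce
# temporarily switch to permissive, retry the OVS start, then switch back
setenforce 0
systemctl restart openvswitch.service && systemctl is-active openvswitch.service
setenforce 1
# summarize the denials recorded during the start attempt in human-readable form
ausearch -m avc -ts recent | audit2allow -w
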
vvb commented

@rkharya @mapuri we need to do the additional steps below when SELinux is in enforcing mode:

yum install policycoreutils-python
mkdir /etc/openvswitch
semanage fcontext -a -t openvswitch_rw_t "/etc/openvswitch(/.*)?"
restorecon -Rv /etc/openvswitch
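
A quick sanity check after the steps above (just a verification sketch, assuming the commands completed cleanly):

semanage fcontext -l | grep openvswitch    # the new /etc/openvswitch rule should be listed
ls -Zd /etc/openvswitch                    # the directory should now carry the openvswitch_rw_t label
systemctl restart openvswitch.service && systemctl is-active openvswitch.service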

UPDATE:
It was also noticed that with the new changes we are installing openvswitch-2.3.1-1.x86_64, whereas the version that used to get installed from the openstack-kilo repo was openvswitch-2.3.1-2.el7.x86_64.

@rkharya is trying out openvswitch-2.3.1-2.el7.x86_64 to see if it makes any difference. If it works, then the above may not be required.

vvb commented

@mapuri @rkharya
UPDATE: this turned out to be because of the version of openvswitch we are using. I tried using 2.3.1-2.el7 and things worked as expected.

May 17 01:57:56 Docker-1 clusterm: level=info msg="TASK [contiv_network : install ovs (redhat)] ***********************************"
May 17 01:58:04 Docker-1 clusterm: level=info msg="changed: [Docker-2-FLM19379EU8]"
May 17 01:58:04 Docker-1 clusterm: level=info msg=
May 17 01:58:04 Docker-1 clusterm: level=info msg="TASK [contiv_network : download ovs binaries (debian)] *************************"
May 17 01:58:04 Docker-1 clusterm: level=info msg="skipping: [Docker-2-FLM19379EU8] => (item={u'url': u'https://cisco.box.com/shared/static/v1dvgoboo5zgqrtn6tu27vxeqtdo2bdl.deb', u'dest': u'/tmp/ovs-common.deb'}) "
May 17 01:58:05 Docker-1 clusterm: level=info msg="skipping: [Docker-2-FLM19379EU8] => (item={u'url': u'https://cisco.box.com/shared/static/ymbuwvt2qprs4tquextw75b82hyaxwon.deb', u'dest': u'/tmp/ovs-switch.deb'}) "
May 17 01:58:05 Docker-1 clusterm: level=info msg=
May 17 01:58:05 Docker-1 clusterm: level=info msg="TASK [contiv_network : install ovs-common (debian)] ****************************"
May 17 01:58:05 Docker-1 clusterm: level=info msg="skipping: [Docker-2-FLM19379EU8]"
May 17 01:58:05 Docker-1 clusterm: level=info msg=
May 17 01:58:05 Docker-1 clusterm: level=info msg="TASK [contiv_network : install ovs (debian)] ***********************************"
May 17 01:58:05 Docker-1 clusterm: level=info msg="skipping: [Docker-2-FLM19379EU8]"
May 17 01:58:05 Docker-1 clusterm: level=info msg=
May 17 01:58:05 Docker-1 clusterm: level=info msg="TASK [contiv_network : start ovs service] **************************************"
May 17 01:58:05 Docker-1 clusterm: level=info msg="changed: [Docker-2-FLM19379EU8]"

mapuri commented

@vvb thanks for digging in.

Yeah, makes sense, as the SELinux rule that we are probably installing in Ansible is specific to the openstack-kilo rpm.

And I had built the ovs rpm using the instructions here: https://n40lab.wordpress.com/2015/01/25/centos-7-installing-openvswitch-2-3-1-lts/

Just curious, how are you installing the openstack-kilo ovs? From what I remember, that repo is no longer accessible, which is what caused us to use this custom rpm.

I think we need to do either of the following:

  • find a way to reliably install the openstack-kilo-specific rpm (a rough sketch follows after this comment);
  • or, if we need to use our custom rpm, adjust the Ansible task that installs the SELinux rule accordingly.

Is my understanding correct?
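
If we go with the first option, something along these lines should work on the nodes (a sketch only; it assumes the openstack-kilo/RDO repo is still reachable from the hosts and that yum-utils is installed for yumdownloader):

yum --showduplicates list openvswitch       # confirm which versions and repos are actually visible
yum install -y openvswitch-2.3.1-2.el7      # pin the exact build that is known to work
# or mirror the rpm once so installs no longer depend on the external repo staying up
yumdownloader --destdir /tmp openvswitch-2.3.1-2.el7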

vvb commented

@mapuri so, it looks like these issues were taken care of in the rpm's init file itself, so they are not seen with the newer rpms. contiv/ansible#196 takes care of it.
The version was chosen based on what I saw in the openstack-kilo repo; somehow yum install openvswitch still worked on my server, not sure if it is because of cached data.

--> Running transaction check
---> Package openvswitch.x86_64 0:2.3.1-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================================
 Package                                           Arch                                         Version                                           Repository                                            Size
=============================================================================================================================================================================================================
Installing:
 openvswitch                                       x86_64                                       2.3.1-2.el7                                       openstack-kilo                                       1.8 M

Transaction Summary
=============================================================================================================================================================================================================
Install  1 Package
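
To rule out stale metadata and confirm the package really is still served by the openstack-kilo repo (a quick check for the cached-data question above; the repo id is taken from the yum output):

yum clean all                                # drop any cached repo metadata
yum --disablerepo='*' --enablerepo=openstack-kilo --showduplicates list available openvswitch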

mapuri commented

OK, sounds good. I guess as long as we have the correct rpm, the current SELinux settings in Ansible will continue to work.

vvb commented

yes, that is correct

vvb commented

resolved by contiv/ansible#196