IDR/deployment

Repo initialisation from infrastructure


Once everything is merged into the infrastructure repo, do something like:

git remote add infrastructure https://github.com/manics/infrastructure
git fetch infrastructure
git checkout infrastructure/master
git checkout -b filter

git filter-branch --tree-filter '
    for d in roles roles-dev os-idr-playbooks idr-playbooks; do
        mkdir -p filter
        test -d ansible/$d && mv ansible/$d filter/ || true
    done
    for f in ansible.cfg requirements.yml; do
        mkdir -p filter
        test -f ansible/$f && mv ansible/$f filter/ || true
    done
    for d in instance keypairs network security-groups; do
        mkdir -p filter/roles
        test -d ansible/roles/openstack-idr-$d &&
            mv ansible/roles/openstack-idr-$d filter/roles/ || true
    done
'

git filter-branch -f --subdirectory-filter filter
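
The rewritten branch could then be pushed into a new repository; a minimal sketch, assuming a remote named deployment pointing at the proposed repo below (the remote name and branch mapping are assumptions):

git remote add deployment https://github.com/manics/deployment.git
# HEAD is still on the local "filter" branch after the rewrite above
git push deployment filter:infrastructure-idr-0-3-2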

Proposed initial repo: https://github.com/manics/deployment/tree/infrastructure-idr-0-3-2

I can think of three options for getting this in:

https://github.com/manics/deployment/tree/infrastructure-idr-0-3-2 is only tested against OpenStack resources provisioned using a private playbook; a public example in this repo will follow in a later PR.

FYI I've got some molecule tests on the way in https://github.com/manics/deployment/tree/infrastructure-idr-0-3-2-next, but they need more work, so it's best to test this with the existing (private) playbooks for now.

https://github.com/manics/deployment/tree/infrastructure-idr-0-3-3 (44ba61c d3167e4 d27b51c)
This needs a bump to the requirements, but once that's done I think it can form the first release here.

diff --git a/ansible/requirements.yml b/ansible/requirements.yml
index 9d5ddca..e6f69eb 100644
--- a/ansible/requirements.yml
+++ b/ansible/requirements.yml
@@ -66,11 +66,19 @@
 - src: openmicroscopy.omero-python-deps
   version: 1.0.0

-- src: openmicroscopy.omero-server
-  version: 1.1.0
+#- src: openmicroscopy.omero-server
+#  version: 1.1.0
+#
+#- src: openmicroscopy.omero-web-runtime
+#  version: 1.0.0

-- src: openmicroscopy.omero-web-runtime
-  version: 1.0.0
+- name: openmicroscopy.omero-web-runtime
+  src: https://github.com/manics/ansible-role-omero-web-runtime
+  version: selinux-utils
+
+- name: openmicroscopy.omero-server
+  src: https://github.com/manics/ansible-role-omero-server
+  version: managedrepo-group

 - src: openmicroscopy.openstack-volume-storage
   version: 1.0.0
@@ -91,7 +99,7 @@
 #  version: 1.0.0

 - src: openmicroscopy.selinux-utils
-  version: 1.0.0
+  version: 1.0.1

 - src: openmicroscopy.storage-volume-initialise
   version: 1.0.0
@@ -118,8 +126,11 @@
 - src: IDR.openstack-idr-keypairs
   version: master

-- src: IDR.openstack-idr-network
-  version: master
+#- src: IDR.openstack-idr-network
+#  version: master
+- name: IDR.openstack-idr-network
+  src: https://github.com/manics/ansible-role-openstack-idr-network
+  version: autodetect-external

 - src: IDR.openstack-idr-security-groups
   version: master
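
Once the pins above are bumped, the vendored roles need refreshing; a sketch, assuming they are installed into the vendor directory mentioned in the testing steps below (the -p path is an assumption; --force overwrites already-installed role versions instead of moving vendor aside):

ansible-galaxy install -r deployment/ansible/requirements.yml -p vendor --force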

This is the correspondence between IDR/deployment and openmicroscopy/infrastructure:

--depends-on ome/ansible-role-omero-server#3
--depends-on ome/ansible-role-omero-web-runtime#2
--depends-on IDR/ansible-role-openstack-idr-network#1

There's a partial molecule test which uses https://github.com/manics/centos-systemd-ip-docker.
I don't think it's ready to go into Travis yet.
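
For reference, a minimal sketch of running that molecule test locally, assuming the role ships a molecule scenario configured for the docker driver and that image (the role path is hypothetical, and the extra docker-py dependency depends on the molecule version):

pip install molecule docker-py       # docker-py is typically needed for the molecule 1.x docker driver
cd roles/<some-role>                 # hypothetical: a role containing a molecule scenario
molecule test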

Steps for testing this:

On necromancer

  • mkdir /ome/www/downloads.openmicroscopy.org/idr/omero/0.3.3-rc1
  • HISTFILE=/dev/null wget https://devspace:******@idr-ci.openmicroscopy.org:8443/job/OMERO-build/lastSuccessfulBuild/artifact/src/target/OMERO.server-5.2.3-414-b95a82e-ice36-b200.zip --no-check-certificate
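
Put together, assuming the download is meant to land in the new release directory (the cd is an assumption; credentials are redacted as in the original):

mkdir /ome/www/downloads.openmicroscopy.org/idr/omero/0.3.3-rc1
cd /ome/www/downloads.openmicroscopy.org/idr/omero/0.3.3-rc1
HISTFILE=/dev/null wget --no-check-certificate \
    'https://devspace:******@idr-ci.openmicroscopy.org:8443/job/OMERO-build/lastSuccessfulBuild/artifact/src/target/OMERO.server-5.2.3-414-b95a82e-ice36-b200.zip'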

management_tools/idr

  • git checkout simon/idr-demo33
  • git clone git://github.com/idr/deployment
  • or, if the vendor directory has already been used: mv vendor vendor.old, since old role versions would not be overwritten otherwise.
  • ansible-galaxy install -r deployment/ansible/requirements.yml
  • export OS_CLOUD=ebi
  • ansible-playbook --diff -e idr_environment=test33 deployment/ansible/idr-01-install-idr.yml -e omero_omego_additional_args='"--downloadurl http://downloads.openmicroscopy.org/idr"' -e omero_upgrade=True -e omero_release=0.3.3-rc1
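
The check-mode recap below presumably came from the same command with check mode enabled; a sketch (the --check flag is the only addition, everything else is copied from the step above):

ansible-playbook --check --diff -e idr_environment=test33 \
    deployment/ansible/idr-01-install-idr.yml \
    -e omero_omego_additional_args='"--downloadurl http://downloads.openmicroscopy.org/idr"' \
    -e omero_upgrade=True -e omero_release=0.3.3-rc1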

In check mode:

PLAY RECAP *********************************************************************
test33-a-database          : ok=17   changed=0    unreachable=0    failed=0
test33-a-dockermanager     : ok=32   changed=1    unreachable=0    failed=0
test33-a-omero             : ok=79   changed=6    unreachable=0    failed=0
test33-database            : ok=17   changed=0    unreachable=0    failed=0
test33-management          : ok=7    changed=0    unreachable=0    failed=0
test33-omero               : ok=92   changed=8    unreachable=0    failed=0
test33-proxy               : ok=39   changed=0    unreachable=0    failed=0

Note: several of the changes are false positives from pip.

Full run:

PLAY RECAP *********************************************************************
test33-a-database          : ok=18   changed=0    unreachable=0    failed=0
test33-a-dockermanager     : ok=32   changed=0    unreachable=0    failed=0
test33-a-omero             : ok=89   changed=9    unreachable=0    failed=0
test33-database            : ok=18   changed=0    unreachable=0    failed=0
test33-management          : ok=7    changed=0    unreachable=0    failed=0
test33-omero               : ok=102  changed=11   unreachable=0    failed=0
test33-proxy               : ok=39   changed=0    unreachable=0    failed=0

@manics: is this due to some local config that you have?

TASK [openmicroscopy.rsync-server : rsyncd | configure shares] *****************
--- before: /etc/rsyncd.conf
+++ after: dynamically generated
@@ -1,4 +1,4 @@
-# Ansible managed: /Users/spli/work/management_tools/idr/deployment/ansible/vendor/openmicroscopy.rsync-server/templates/etc-rsyncd-conf.j2 modified on 2017-01-17 16:58:21 by spli on ls31908.local
+# Ansible managed

 [global]
 timeout = 300

changed: [test33-omero]

@joshmoore No, I don't have any changes to ansible.cfg

Remaining issues are handled in #4