Store SSIDs so we can refer to them later
dmsimard opened this issue · 4 comments
(Just brainstorming by myself here)
Maybe store a JSON file that looks like this in ~/.cache/cico/cico.cache:
{
    "<ssid>": {
        "ssid": "<ssid>",
        "timestamp": "<unixtimestamp>",
        "hosts": ["fqdn.host1", "fqdn.host2"]
    },
    "<ssid>": {
        "ssid": "<ssid>",
        "timestamp": "<unixtimestamp>",
        "hosts": ["fqdn.host3", "fqdn.host4"]
    }
}
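For what it's worth, reading and writing that cache could look roughly like this in Python (the path is the one proposed above; the helper names are just assumptions, not an existing cico API):

import json
import os
import time

# Hypothetical cache location, matching the proposal above.
CACHE_FILE = os.path.expanduser("~/.cache/cico/cico.cache")


def load_cache():
    """Return the cached sessions, or an empty dict if no cache exists yet."""
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except (IOError, ValueError):
        return {}


def save_session(ssid, hosts):
    """Record a session keyed by its ssid, with a unix timestamp."""
    cache = load_cache()
    cache[ssid] = {
        "ssid": ssid,
        "timestamp": str(int(time.time())),
        "hosts": hosts,
    }
    os.makedirs(os.path.dirname(CACHE_FILE), exist_ok=True)
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f, indent=4)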
The CLI could probably implement some commands to make use of this, something like:
cico session
+--------------------------------------+-----------------+----------------------------------+
| session id | timestamp | hosts |
+--------------------------------------+-----------------+----------------------------------+
| 6ec652eb-f032-4c2a-9d5b-9ee7a9ace9d4 | Nov 20 07:00:01 | server1.cluster, server2.cluster |
| 77188cd7-29d8-4f2b-908f-27933ee64a70 | Nov 20 08:00:01 | server3.cluster, server4.cluster |
| fe62229d-3e90-4dff-9700-e5a6931dfca9 | n/a | server5.cluster, server6.cluster |
| 5bb6e164-3388-49e9-9ae9-65f374050d01 | n/a | server7.cluster |
+--------------------------------------+-----------------+----------------------------------+
cico session --local
+--------------------------------------+-----------------+----------------------------------+
| session id | timestamp | hosts |
+--------------------------------------+-----------------+----------------------------------+
| 6ec652eb-f032-4c2a-9d5b-9ee7a9ace9d4 | Nov 20 07:00:01 | server1.cluster, server2.cluster |
| 77188cd7-29d8-4f2b-908f-27933ee64a70 | Nov 20 08:00:01 | server3.cluster, server4.cluster |
+--------------------------------------+-----------------+----------------------------------+
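As a rough illustration, the --local variant could simply read the cache and render it; a minimal sketch, assuming PrettyTable for the output (none of this is an existing cico interface):

from datetime import datetime

from prettytable import PrettyTable  # just one way to get this table style


def list_sessions(cache):
    """Print cached sessions as a table like the one sketched above."""
    table = PrettyTable(["session id", "timestamp", "hosts"])
    for ssid, session in cache.items():
        ts = session.get("timestamp")
        when = (datetime.fromtimestamp(int(ts)).strftime("%b %d %H:%M:%S")
                if ts else "n/a")
        table.add_row([ssid, when, ", ".join(session["hosts"])])
    print(table)

Wiring that up as list_sessions(load_cache()) would cover cico session --local; the plain cico session variant would need the server side to expose the data.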
What are we trying to achieve here? (~ what's the user story?)
@kbsingh it's mostly to decouple/defer the release of nodes to another task within Ansible.
I explained this a bit here: rdo-infra/weirdo#1
Long story short: if something fails in a playbook, Ansible will abort. If releasing the nodes is part of that playbook, they will never be released because the run failed in the middle.
Khaleesi addresses this by persisting the ssid of the current session (writing it to a file) and then actually running ansible twice, like so: https://github.com/redhat-openstack/khaleesi/blob/master/jenkins-jobs/builders.yaml#L63
One ansible playbook runs the tests and the other does the cleanup.
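Khaleesi does that from a Jenkins shell builder; the same two-pass pattern in Python would look roughly like this (a sketch only, the playbook names are hypothetical):

import subprocess
import sys

# First pass: run the tests and remember the exit code, but don't abort yet.
rc = subprocess.call(["ansible-playbook", "run-tests.yml"])

# Second pass: always run the cleanup so the nodes get released,
# even when the test playbook failed partway through.
subprocess.call(["ansible-playbook", "release-nodes.yml"])

sys.exit(rc)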
Might have found a way around this, experimenting some more. Would love to avoid having to do this until there's a formal session endpoint in Duffy :)
This isn't necessary after all; let's revisit it if/when Duffy offers a session API.