keycloak_quarkus role restarts all nodes at the same time
roumano opened this issue · 1 comment
SUMMARY
Currently, when a configuration change occurs, the keycloak_quarkus role restarts all nodes at the same time,
so the keycloak service is down for that (small) period.
It also introduces a problem when trying to apply a bad configuration:
all nodes will be changed and restarted, so the whole keycloak service will be down until a correction is done.
As an example, if I change the SSL key file to a non-existing file (or one with a permission issue, or ...) with:
keycloak_quarkus_key_file: "/etc/ssl/private/not_existing_file.key.pem"
all keycloak nodes will be down, so the service will be down.
I think we should, at least, introduce a throttle or forks on the service restart,
or even better, apply the change on the first node before the other nodes (see the sketch below).
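For instance, a throttled restart could be as simple as the following (a minimal sketch; the service name `keycloak` is an assumption, not necessarily what the role manages):

```yaml
# Restart the service on at most one host at a time (service name is assumed)
- name: Restart keycloak one node at a time
  ansible.builtin.service:
    name: keycloak
    state: restarted
  throttle: 1
```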
Personally, I like how the restart has been implemented in this role: https://github.com/mrlesmithjr/ansible-mariadb-galera-cluster/blob/master/tasks/setup_cluster.yml :
- it does not use `notify`, but registers the change instead:

```yaml
register: "_mariadb_galera_cluster_reconfigured"
```

- it then applies the change to the first node only:

```yaml
- name: setup_cluster | cluster rolling restart - apply config changes (first node)
  ansible.builtin.include_tasks: manage_node_state.yml
```

- then (if the first node restarted successfully) it restarts the others:

```yaml
- name: setup_cluster | cluster rolling restart - apply config changes (other nodes)
  ansible.builtin.include_tasks: manage_node_state.yml
```
This solution would resolve both issues I've described earlier.
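Transposed to this role, the first-node-then-others pattern could look roughly like this (a minimal sketch; the inventory group `keycloak` and the service name are assumptions):

```yaml
# Apply the restart to the first node only; any_errors_fatal aborts the
# whole play if it fails, leaving the remaining nodes untouched
- name: Rolling restart - first node
  ansible.builtin.service:
    name: keycloak            # assumed service name
    state: restarted
  when: inventory_hostname == groups['keycloak'][0]
  any_errors_fatal: true

# Only reached when the first node restarted successfully
- name: Rolling restart - other nodes, one at a time
  ansible.builtin.service:
    name: keycloak
    state: restarted
  throttle: 1
  when: inventory_hostname != groups['keycloak'][0]
```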
ISSUE TYPE
- Bug Report
ANSIBLE VERSION
core 2.14.1
COLLECTION VERSION
middleware_automation.keycloak 2.1.0
STEPS TO REPRODUCE
- change a configuration value, for example keycloak_quarkus_frontend_url
- run the playbook with the role middleware_automation.keycloak.keycloak_quarkus
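A minimal reproduction playbook could look like this (the inventory group and URL are hypothetical):

```yaml
# Hypothetical playbook: any change to this variable triggers
# a simultaneous restart of every node
- hosts: keycloak_servers       # hypothetical inventory group
  roles:
    - middleware_automation.keycloak.keycloak_quarkus
  vars:
    keycloak_quarkus_frontend_url: "https://sso.example.com"   # changed value
```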
EXPECTED RESULTS
- all keycloak services should be restarted one after the other
ACTUAL RESULTS
- all keycloak services restart at the same time, without any error handling
ADDITIONAL INFORMATION
Also, I think start.yml and restart.yml need to be merged (state: restarted always does a state: started and more), and the "Wait until {{ keycloak.service_name }} becomes active {{ keycloak.health_url }}" task also needs to be used in the restart behavior (see the sketch below).
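As a rough sketch, a merged restart could chain the restart with the existing health wait (the keycloak.service_name and keycloak.health_url variables are taken from the quoted task name; the exact module and task layout are assumptions):

```yaml
- name: "Restart {{ keycloak.service_name }}"
  ansible.builtin.systemd:
    name: "{{ keycloak.service_name }}"
    state: restarted

# Reuse the same health check after a restart, not only after a start
- name: "Wait until {{ keycloak.service_name }} becomes active {{ keycloak.health_url }}"
  ansible.builtin.uri:
    url: "{{ keycloak.health_url }}"
    status_code: 200
  register: keycloak_health
  until: keycloak_health.status == 200
  retries: 25
  delay: 5
```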
Thanks for reporting; we already have similar logic in the keycloak role, but it has not yet been ported to keycloak_quarkus. Ideally we would like to support custom restart orchestration (provided by users from the calling playbooks) in addition to the default; implementation of both throttled restarts and wait_for_healthy conditions is on the roadmap.