Consul cluster broken after Swarm manager node restarted
iromli opened this issue · 1 comment
iromli commented
Given 2 Swarm nodes (manager and worker) with a consul container deployed to each node, we have two interesting use cases:
Use case 1
If the manager node is stopped, the consul container on the worker node is still able to serve requests.
Use case 2
Once the manager node comes back, neither consul container is able to serve requests. The logs show that the old leader is gone and the two consul instances can't agree on which one should become the new leader: each claims to be the leader itself.
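For what it's worth, one way to confirm the split is to ask each instance which Raft peers it sees (the container name below is a placeholder, and the `operator raft` subcommand assumes a Consul version that ships it):

```sh
# Ask each Consul instance which Raft peers it knows about and which one
# it currently considers the leader. <consul-container> is a placeholder.
docker exec <consul-container> consul operator raft list-peers

# The raft section of `consul info` (state, last_contact, num_peers)
# shows the same disagreement from each instance's point of view.
docker exec <consul-container> consul info
```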
My initial thought is that this happens because quorum can't be re-established with only 2 servers: a 2-server Raft cluster needs both members for a majority, so once they disagree the split can't be resolved. I'm proposing to deploy 3 Consul instances (1 per Swarm node).
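To spell out the arithmetic: Raft needs a majority of floor(n/2)+1 servers, so 2 servers have a quorum of 2 and tolerate zero failures, while 3 servers also have a quorum of 2 but tolerate one failure. A minimal sketch of how each of the 3 servers could be started (the flags are real Consul agent flags; the addresses and data dir are placeholders):

```sh
# -bootstrap-expect=3 : don't elect a leader until 3 servers have joined,
#                       which prevents the two-leaders situation above.
# -retry-join         : keep retrying the join until the peer is reachable,
#                       so the order in which nodes restart doesn't matter.
consul agent -server \
  -bootstrap-expect=3 \
  -retry-join=<node-2-ip> \
  -retry-join=<node-3-ip> \
  -bind=<this-node-ip> \
  -data-dir=/consul/data
```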
iromli commented
The new multi-host example uses 3 Consul nodes to achieve a safe quorum.
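For reference, a minimal sketch of what a 3-node setup could look like under Swarm mode; the network name below is made up and the repo's actual multi-host example may differ:

```sh
# Hypothetical sketch: an overlay network plus one Consul server per Swarm
# node, assuming a 3-node Swarm. Names below are placeholders.
docker network create --driver overlay consul-net

docker service create \
  --name consul \
  --mode global \
  --network consul-net \
  -e CONSUL_BIND_INTERFACE=eth0 \
  consul agent -server \
    -bootstrap-expect=3 \
    -retry-join=tasks.consul \
    -client=0.0.0.0
```

On an overlay network, `tasks.consul` resolves to the individual task IPs rather than the service VIP, so each server keeps retrying the join against its siblings until all 3 are up and a leader can be safely elected.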