Scale down of masters results in unavailable cluster
cehoffman opened this issue · 2 comments
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
The cluster became unavailable after scaling down the master pods.
What you expected to happen:
Reducing the number of masters should result in a new minimum master count being applied to all nodes before the scaled-down masters are terminated.
How to reproduce it (as minimally and precisely as possible):
Create a cluster with 3 masters (minimum master count 2). Add 3 more masters (bad, I know, but it was for a simple configuration change, so it should be quick). After the 3 new masters are up and functional, scale the old set to 0.
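For context, Elasticsearch 6.x zen discovery expects `discovery.zen.minimum_master_nodes` to be set to a quorum of master-eligible nodes, i.e. `(masters / 2) + 1`. A quick sketch of what the correct minimum would be at each step of the reproduction above (values are my own arithmetic, not from the operator):

```shell
# Quorum formula for zen discovery: (master_eligible_nodes / 2) + 1
masters=3
echo $(( masters / 2 + 1 ))   # 3 masters -> minimum should be 2

masters=6
echo $(( masters / 2 + 1 ))   # 6 masters (old + new) -> minimum should be 4
```

So after scaling the old set to 0, the surviving 3 masters still require 4 masters for a quorum unless the minimum is lowered first, which is why the cluster goes unavailable.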
Anything else we need to know?: Elasticsearch 6.3.1
Environment:
- Kubernetes version (use `kubectl version`): 1.9.6
- Cloud provider or hardware configuration: Azure
- Install tools:
- Others:
I worked around this using the cluster settings API:

```shell
http put localhost:9200/_cluster/settings\?flat_settings=true persistent:='{"discovery.zen.minimum_master_nodes": 2}'
```
The above fix doesn't persist across restarts, so the cluster will become unavailable again.
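For anyone without HTTPie, the same workaround can be expressed with curl. This is a sketch of the equivalent request, assuming the cluster API is reachable at `localhost:9200`; note that a `persistent` setting is stored in the cluster state, but the operator may still re-apply its own value on the next reconciliation:

```shell
# Equivalent curl call to lower the master quorum (assumed endpoint: localhost:9200)
curl -X PUT "localhost:9200/_cluster/settings?flat_settings=true" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"discovery.zen.minimum_master_nodes": 2}}'
```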