Restart of a single-node VerneMQ cluster triggers a cluster-leave scenario
Closed this issue · 2 comments
Hello.
I've discovered that restarting a single-node VerneMQ broker, for example using kubectl rollout restart sts/vernemq,
triggers a cluster-leave scenario, losing all stored messages from persistent sessions in the broker.
The broker treats the restart as a scale-down and leaves the cluster, logging:
11:43:40.322 [info] all queues offline: 0 draining, 1 offline, 100 msgs
The docker-vernemq version that exhibits this issue is 1.13; the problem does not occur in 1.12.3.
I've located the source of the issue: this line outputs the name of the single active pod with a trailing space. The comparison with the actual pod name on the next line then returns false, so the script never enters the intended single-node branch, where the broker simply stops without performing a cluster leave.
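The failure mode above can be sketched in a few lines of shell. This is a hypothetical reproduction, not the actual docker-vernemq script: the variable names and messages are made up, but it shows how a stray trailing space in a captured pod name defeats the equality check, and how trimming the value before comparing avoids it.

```shell
#!/bin/sh
# Hypothetical sketch of the trailing-space bug (names are illustrative).
pod_name="vernemq-0"
discovered="vernemq-0 "   # lookup output captured with a trailing space

if [ "$discovered" = "$pod_name" ]; then
  echo "single-node: stop without cluster leave"
else
  # Wrong branch taken because "vernemq-0 " != "vernemq-0".
  echo "multi-node: performing cluster leave"
fi

# Possible fix: strip surrounding whitespace before comparing.
trimmed=$(echo "$discovered" | xargs)
if [ "$trimmed" = "$pod_name" ]; then
  echo "single-node: stop without cluster leave"
fi
```

The same trimming could also be done with shell parameter expansion (`${discovered% }`) to avoid spawning a subprocess; `xargs` is just a compact way to collapse surrounding whitespace.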
Thank you in advance.
@vkalekis that's obviously not the intended behaviour, so I'm adding a bug label. It seems you have already identified the cause (thanks for your help!)
I'll see that this gets fixed in a new official release as soon as possible.
Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.
@vkalekis PR open with an attempt to fix. Maybe you have time to test.