Health monitoring doesn't reschedule pods in a node-lost scenario
hr1sh1kesh opened this issue · 2 comments
Is this a BUG REPORT or FEATURE REQUEST?:
BUG
What happened:
Deployed Stork version 1.0.2 on a Kubernetes cluster.
Deployed a Cassandra StatefulSet.
Brought down a node where one of the Cassandra pods was scheduled.
The health monitor feature should reschedule the pod since the PX node is offline, but that doesn't happen.
What you expected to happen:
Stork's health monitor should reschedule the pods to another node, thereby providing auto-healing.
How to reproduce it (as minimally and precisely as possible):
Deploy Cassandra with Stork 1.0.2 as the scheduler.
Stop the node that hosts one of the Cassandra pods.
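For reference, using Stork as the scheduler means setting `schedulerName: stork` in the pod template of the StatefulSet. A minimal sketch of what the Cassandra manifest might look like (names, image, and replica count are illustrative, not taken from this report):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      # Route scheduling through Stork so its health monitor
      # can reschedule pods off unhealthy storage nodes.
      schedulerName: stork
      containers:
      - name: cassandra
        image: cassandra:3.11
```

With this in place, stopping the node that hosts one of the pods should trigger Stork's health monitor to delete and reschedule that pod.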
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.8.8
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
@hr1sh1kesh Can you attach the logs from Stork?
Unable to reproduce the issue anymore.