Goldpinger is too sensitive to autoscaler activity
dharmab opened this issue · 5 comments
Describe the bug
The order of operations for removing a node in Kubernetes is:
1. The cluster autoscaler removes the VM from the underlying cloud provider
2. The Node object enters the NotReady state since the kubelet stops reporting
3. The cloud controller eventually notices that the VM is gone and removes the Node object
The time between steps 2 and 3 can be quite long (many minutes in some clouds). Goldpinger continuously tries to reach the node during this window, causing spikes in Goldpinger metrics.
To Reproduce
Steps to reproduce the behavior:
- Overscale a cluster in a cloud with long VM deletion times, such as Azure
- Allow the cluster to scale down
- Observe peer failures in Goldpinger metrics and logs
Expected behavior
Goldpinger should provide a mechanism to filter out NotReady nodes from metric queries to focus on Nodes which are expected to be functioning normally.
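For reference, here is a minimal client-go sketch (not Goldpinger's actual code) of the kind of filter this request describes: list the Nodes and keep only the ones whose Ready condition is True. The function and variable names are illustrative.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// nodeIsReady reports whether the Node's Ready condition is True.
// NotReady nodes (for example, ones whose VM is already gone but whose
// Node object has not yet been removed) return false.
func nodeIsReady(node corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Only Ready nodes would be pinged / counted in the metrics; nodes stuck
	// in the NotReady window during a scale-down are skipped.
	for _, node := range nodes.Items {
		if nodeIsReady(node) {
			fmt.Println("checking peers on node:", node.Name)
		}
	}
}
```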
Screenshots
Here's an example showing Goldpinger error rates spiking as a cluster scaled down over a period of hours.
Environment (please complete the following information):
- Operating System and Version: N/A
- Browser [e.g. Firefox, Safari] (if applicable): N/A
Hello people!
We have the same issue ^^ but filtering out a node just because its status is NotReady is not quite right: a node can also be NotReady because it is legitimately broken rather than because of a scale-down, and we would still want to see that in the metrics.
So it would be better to filter nodes on a different label/attribute, like SchedulingDisabled or something similar. What do you think? A rough sketch is below.
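A rough sketch of that idea, assuming the same corev1 import as the snippet above; the function name is hypothetical. Cordoned nodes are what kubectl displays as SchedulingDisabled, and that state is recorded on the Node's spec.

```go
// skipNode is a hypothetical predicate for the suggestion above: ignore
// nodes that have been cordoned (shown by kubectl as SchedulingDisabled),
// which typically happens during a planned drain or scale-down, instead of
// keying off the NotReady status alone.
func skipNode(node corev1.Node) bool {
	// Spec.Unschedulable is set by `kubectl cordon` / `kubectl drain`.
	// A NotReady node that is *not* cordoned is more likely to be genuinely
	// broken and should still show up in the metrics.
	return node.Spec.Unschedulable
}
```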
Hello!
Any updates on this one?