Node NotReady Disruption Controller
Description
What problem are you trying to solve?
Sometimes nodes just become NotReady for a variety of reasons (a bad cloud provider instance, a non-responsive kubelet, etc.). When a Node has been in a Ready state and then transitions to NotReady, I think Karpenter should have another disruption controller that monitors for these nodes and terminates them.
Third-party controllers like the Spot.io Ocean product and the Cluster Autoscaler both handle nodes that become NotReady for you automatically. Karpenter should be able to do the same.
(Note: we have also raised this with our AWS TAM via a support ticket, and we were advised to open a feature request here.)
Related: #1573
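To make the requested behavior concrete, here is a minimal sketch of what such a controller could look like as a controller-runtime reconciler. This is illustrative only, not Karpenter's actual disruption machinery: the `notReadyTolerance` knob, the `Controller` type, and the choice to simply delete the Node are all assumptions for the sketch.

```go
package notready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// notReadyTolerance is how long a node may stay NotReady before this
// controller terminates it (hypothetical knob; not a real Karpenter setting).
const notReadyTolerance = 5 * time.Minute

// Controller watches Nodes and deletes those that have been NotReady
// for longer than the tolerance window.
type Controller struct {
	client.Client
}

func (c *Controller) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	node := &corev1.Node{}
	if err := c.Get(ctx, req.NamespacedName, node); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	for _, cond := range node.Status.Conditions {
		if cond.Type != corev1.NodeReady {
			continue
		}
		if cond.Status == corev1.ConditionTrue {
			// Node is healthy; nothing to do.
			return ctrl.Result{}, nil
		}
		// NOTE: a real implementation should also verify the node
		// previously reached Ready (i.e. this is a Ready -> NotReady
		// transition, not a node that is still launching); that
		// bookkeeping is elided here.
		notReadyFor := time.Since(cond.LastTransitionTime.Time)
		if notReadyFor < notReadyTolerance {
			// Re-check once the tolerance window has elapsed.
			return ctrl.Result{RequeueAfter: notReadyTolerance - notReadyFor}, nil
		}
		// Deleting the Node object hands the rest off to Karpenter's
		// termination finalizer (drain, terminate instance, replace capacity).
		return ctrl.Result{}, client.IgnoreNotFound(c.Delete(ctx, node))
	}
	return ctrl.Result{}, nil
}

func (c *Controller) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Node{}).
		Complete(c)
}
```

In practice, deleting the Node should let Karpenter's existing termination flow cordon and drain it and terminate the backing instance, with normal provisioning replacing the capacity afterwards; a production version would also need safeguards (e.g., limiting how many NotReady nodes can be disrupted at once) that this sketch omits.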
How important is this feature to you?
This is actually a blocker for us migrating off of our current tools: we launch enough nodes, and see enough failures throughout the day, that we cannot fully migrate unless we have a completely automated self-healing system where these nodes get cycled out once they become NotReady.
(Separate but related is the ongoing discussion at bottlerocket-os/bottlerocket#4075 about EKS nodes becoming unready due to heavy memory pressure.)
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
This issue is currently awaiting triage.
If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This feature is also important to us, as we have run into this case a few times now where a node gets stuck in NotReady and requires manual intervention. We've been slowly migrating our test environments from Spot.io to Karpenter, and this is currently stopping us from going to production.
We would also be interested in helping to implement this feature.