AWS: Newly autoscaled worker nodes not added to the targets of Network Load Balancer
Hello,
We are experiencing an issue that is essentially a duplicate of kubernetes/cloud-provider-aws#824
Kops version: 1.28.4
Steps to reproduce: Provision a Network Load Balancer (NLB) in AWS using the ingress-nginx Helm chart, then terminate one of the nodes.
Expected result: After the node rejoins the cluster, it should be registered as a target for the load balancer.
Actual result: The node is not registered as a target instance
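For context, a minimal ingress-nginx Helm values sketch that provisions an AWS NLB might look like the following. This is an assumption about the reporter's setup, not taken from the issue; the annotation key is the standard AWS load balancer type annotation:

```yaml
# values.yaml for the ingress-nginx Helm chart (hypothetical sketch)
controller:
  service:
    type: LoadBalancer
    annotations:
      # Ask the AWS cloud controller to provision an NLB
      # instead of a classic ELB.
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```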
The solution is to upgrade registry.k8s.io/provider-aws/cloud-controller-manager to v1.28.5 (presumably any later version should also work once available).
At the moment we are able to achieve this by updating the cluster config as follows:

```yaml
cloudControllerManager:
  image: registry.k8s.io/provider-aws/cloud-controller-manager:v1.28.5
```
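In a full kOps cluster manifest, that override sits under `spec` (a sketch assuming the standard kOps cluster spec layout):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  # Pin the AWS cloud-controller-manager image until a kOps
  # release ships the fixed default.
  cloudControllerManager:
    image: registry.k8s.io/provider-aws/cloud-controller-manager:v1.28.5
```

The change can be applied with `kops edit cluster`, then `kops update cluster --yes`, followed by a rolling update (`kops rolling-update cluster --yes`).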
I am not exactly sure what the side-effects of the workaround might be, but it is a serious problem if a newly joined node cannot register as an NLB target.
The only side-effect is that it's easy to forget to remove the override on the next cluster upgrade.
Should be fixed in the next 1.28 or 1.29 stable release.
@hakman: Closing this issue.
In response to this:
Fixed in #16524 and will be available in the next kOps 1.28 release.
/close