Failure in controller booting up - v0.14.0-rc1
leowmjw opened this issue · 6 comments
My previous attempt to upgrade from v0.13.1 to v0.14.0-rc1 was failing, so I tried a fresh install; it only got worse. Now the controller fails even to boot up.
How do I debug a failure of the controller to start up?
CloudFormation does not show much:
UPDATE_IN_PROGRESS | AWS::AutoScaling::AutoScalingGroup | Controllers | Failed to receive 2 resource signal(s) for the current batch. Each resource signal timeout is counted as a FAILURE.
Note that with the same cluster.yaml, v0.13.1 at least boots the cluster and I can run basic kubectl commands (there were other problems, like the canal pod crash-looping).
Hi, you would need to SSH onto the controllers and have a look at the state of their systemd units. install-kube-system and cfn-signal are the two most important ones for debugging the progress of a kube-aws boot.
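The suggestion above can be sketched as a short debugging session. This is a hedged example, not an official procedure: the key path and controller IP are placeholders you must fill in, and the `.service` suffixes on the two unit names are assumed from standard systemd conventions.

```shell
#!/bin/sh
# Sketch: inspect the systemd units that gate kube-aws controller boot.
# Replace the key path and <controller-ip> with your own values
# (the controller's address is visible in the EC2 console under the
# "Controllers" Auto Scaling group).

# 1. SSH onto a controller node (Container Linux nodes use the "core" user):
ssh -i ~/.ssh/<your-key>.pem core@<controller-ip>

# --- then, on the controller itself: ---

# 2. Check the state of the two units mentioned above:
systemctl status install-kube-system.service cfn-signal.service

# 3. Read their logs to find the step that is failing or hanging;
#    cfn-signal is what reports success back to CloudFormation, so if it
#    never completes, the ASG times out with the "Failed to receive
#    resource signal(s)" error shown in the issue.
journalctl -u install-kube-system.service --no-pager | tail -n 50
journalctl -u cfn-signal.service --no-pager | tail -n 50
```

If install-kube-system is stuck, its journal usually shows which manifest or API call is failing; that is a better starting point than the CloudFormation event alone.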
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.