kubernetes-retired/kube-aws

kube-aws-0.14.2 Unable to create stack with aws-iam-authenticator enabled

flah00 opened this issue · 4 comments

It seems like the worker IAM role ARN referred to in the control plane stack is errantly imported from the network stack, which does not export it. This prevents the control plane stack from being created successfully.

kube-aws output

+00:09:11       CREATE_FAILED                           k9s-zoo-Controlplane-1UGL3AD71MJC2 "No export named k9s-zoo-Network-1C91342Y0NUYU-nodepool1IAMRoleWorkerArn found"
+00:09:21       CREATE_FAILED                           Controlplane               "Embedded stack arn:aws:cloudformation:us-east-1:221645429527:stack/k9s-zoo-Controlplane-1UGL3AD71MJC2/1d7b6c10-0bb7-11ea-bcfc-0e9c8848c400 was not successfully created: No export named k9s-zoo-Network-1C91342Y0NUYU-nodepool1IAMRoleWorkerArn found"
+00:09:21       CREATE_FAILED                           k9s-zoo                    "The following resource(s) failed to create: [Controlplane]. "
Error: error updating cluster: Stack creation failed: CREATE_FAILED : The following resource(s) failed to create: [Controlplane].

Printing the most recent failed stack events:
CREATE_FAILED AWS::CloudFormation::Stack k9s-zoo The following resource(s) failed to create: [Controlplane].
CREATE_FAILED AWS::CloudFormation::Stack Controlplane Embedded stack arn:aws:cloudformation:us-east-1:221645429527:stack/k9s-zoo-Controlplane-1UGL3AD71MJC2/1d7b6c10-0bb7-11ea-bcfc-0e9c8848c400 was not successfully created: No export named k9s-zoo-Network-1C91342Y0NUYU-nodepool1IAMRoleWorkerArn found
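Assuming the AWS CLI is configured for the same account and region as the failing stack, the missing export can be confirmed (and any stack that actually does export the worker role ARN located) by listing the CloudFormation exports; the filter string below is taken from the error message above:

aws cloudformation list-exports --region us-east-1 \
    --query "Exports[?contains(Name, 'IAMRoleWorkerArn')].[Name, ExportingStackId]" \
    --output table

If the only matching export is published under the node pool stack's name rather than the network stack's name, that would be consistent with the control plane template importing from the wrong stack.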

kube-aws/clusters/k9s-zoo/exported/stacks/control-plane/stack.json

- rolearn: ",{"Fn::ImportValue":{"Fn::Sub":"${NetworkStackName}-nodepool1IAMRoleWorkerArn"}},"\n
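For comparison, a sketch of how the same line would presumably need to look if it imported the ARN from the node pool stack instead of the network stack. ${NodePool1StackName} is a hypothetical parameter used here only for illustration; the actual parameter and export names published by the node pool stack may differ:

- rolearn: ",{"Fn::ImportValue":{"Fn::Sub":"${NodePool1StackName}-nodepool1IAMRoleWorkerArn"}},"\n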

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.