Permission error with default CloudFormation template: missing elasticloadbalancing:SetSecurityGroups IAM permission
eromanova opened this issue · 3 comments
/kind bug
What steps did you take and what happened:
I've followed https://cluster-api-aws.sigs.k8s.io/getting-started to initialize Cluster API Provider AWS on a management cluster:
- Set up admin AWS credentials to create the IAM CloudFormation stack (exported the admin AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
- Created the IAM CloudFormation stack with clusterawsadm bootstrap iam create-cloudformation-stack
- Attached the created policies to my AWS user (not an admin)
- Set up that user's credentials (exported the user's AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
- Ran export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile) and initialized the management cluster with the AWS provider by running clusterctl init --infrastructure aws
- Created a workload cluster with spec.controlPlaneLoadBalancer.loadBalancerType: nlb
- A permission error occurs in cluster-api-provider-aws:
E0715 12:45:14.646085 1 controller.go:329] "Reconciler error" err=<
failed to apply security groups to load balancer "hmc-system-ekaz-dev-apiserver": AccessDenied: User: arn:aws:iam::643893117298:user/ekaz-hmc is not authorized to perform: elasticloadbalancing:SetSecurityGroups on resource: arn:aws:elasticloadbalancing:us-east-2:643893117298:loadbalancer/net/hmc-system-ekaz-dev-apiserver/d5c794c5369f3d20 because no identity-based policy allows the elasticloadbalancing:SetSecurityGroups action
status code: 403, request id: ba4a2814-495f-4476-8170-326ff3dfd72c
> controller="awscluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSCluster" AWSCluster="hmc-system/ekaz-dev" namespace="hmc-system" name="ekaz-dev" reconcileID="1433ad0b-a075-42b6-9781-a413bd7526c7"
What did you expect to happen:
Workload cluster to deploy successfully
Anything else you would like to add:
AWSCluster spec:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: ekaz
spec:
  region: us-east-2
  controlPlaneLoadBalancer:
    healthCheckProtocol: TCP
    loadBalancerType: nlb
Should the elasticloadbalancing:SetSecurityGroups permission be added to the default CloudFormation templates, or did I misconfigure something? Or is this expected behavior, i.e. when using a non-default load balancer configuration I should also apply a custom IAM configuration (via AWSIAMConfiguration)? Thanks in advance.
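If a custom IAM configuration is indeed the expected route, I assume the missing action would be added to the controllers' policy with something along these lines, passed via clusterawsadm bootstrap iam create-cloudformation-stack --config <file> (a sketch based on my reading of the AWSIAMConfiguration docs; the extraStatements placement and the broad Resource are my assumptions, not verified):
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSIAMConfiguration
spec:
  clusterAPIControllers:
    extraStatements:
      - Effect: Allow
        Action:
          - elasticloadbalancing:SetSecurityGroups
        Resource:
          - "*"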
Environment:
- Cluster-api-provider-aws version: v2.5.0
- Kubernetes version (use kubectl version):
  Client Version: v1.29.1
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.30.0
- OS (e.g. from /etc/os-release):
  ProductName: macOS
  ProductVersion: 14.5
  BuildVersion: 23F79
This issue is currently awaiting triage.
If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign
/lifecycle active