Failed to Create an AccessEntry for Karpenter Node Role
hflobao opened this issue · 1 comment
Describe the bug
When creating an AccessEntry to be used by nodes provisioned by the Karpenter autoscaler, the ACK resource remains stuck in this state:
```yaml
status:
  ackResourceMetadata:
    arn: arn:aws:eks:us-east-1:012345678910:access-entry/cluster-eks-nprd-pnu0001324-s7/role/759021108710/karpenter-node-role-cluster-eks-nprd-pnu0001324-s7/f8c8bdf0-4831-b76a-5b34-b465b3bd4b6d
    ownerAccountID: "012345678910"
    region: us-east-1
  conditions:
  - message: |-
      InvalidParameterException: This operation can only be performed on Access Entries with a type of "STANDARD".
      {
        RespMetadata: {
          StatusCode: 400,
          RequestID: "db318416-8193-4f94-b553-0180a150b89e"
        },
        Message_: "This operation can only be performed on Access Entries with a type of \"STANDARD\"."
      }
    status: "True"
    type: ACK.Recoverable
  - lastTransitionTime: "2024-08-23T12:15:53Z"
    message: Unable to determine if desired resource state matches latest observed state
    reason: |-
      InvalidParameterException: This operation can only be performed on Access Entries with a type of "STANDARD".
      {
        RespMetadata: {
          StatusCode: 400,
          RequestID: "db318416-8193-4f94-b553-0180a150b89e"
        },
        Message_: "This operation can only be performed on Access Entries with a type of \"STANDARD\"."
      }
    status: Unknown
    type: ACK.ResourceSynced
  createdAt: "2024-08-22T20:56:40Z"
  modifiedAt: "2024-08-22T20:56:40Z"
```
Checking in CloudTrail, it's possible to see that the AccessEntry was in fact created, but the ack-eks-controller then tries to update it, which is not allowed for this entry type:
Successful creation:
```json
{
  "eventVersion": "1.09",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "AROA3BOJO7XTFT6XXXXXX:eks-cluster-ek-ack-eks-co-a0e5ca17-7d10-4d01-995c-275765e9c280",
    "arn": "arn:aws:sts::012345678910:assumed-role/ACKRole-cluster-eks-nprd-pnu0001324-s7/eks-cluster-ek-ack-eks-co-a0e5ca17-7d10-4d01-995c-275765e9c280",
    "accountId": "012345678910",
    "accessKeyId": "ASIA3BOJO7XTCTXXXXXX",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "AROA3BOJO7XTFT6XXXXXX",
        "arn": "arn:aws:iam::012345678910:role/ACKRole-cluster-eks-nprd-pnu0001324-s7",
        "accountId": "012345678910",
        "userName": "ACKRole-cluster-eks-nprd-pnu0001324-s7"
      },
      "attributes": {
        "creationDate": "2024-08-22T20:46:10Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2024-08-22T20:56:40Z",
  "eventSource": "eks.amazonaws.com",
  "eventName": "CreateAccessEntry",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "98.80.131.36",
  "userAgent": "aws-controllers-k8s/eks.services.k8s.aws-1.4.4 (GitCommit/1afb0be525003c884306f73102991dd2557c1e3d; BuildDate/2024-08-07T04:31; CRDKind/AccessEntry; CRDVersion/v1alpha1) aws-sdk-go/1.49.13 (go1.22.5; linux; amd64)",
  "requestParameters": {
    "clientRequestToken": "AE2AC76E-E7A3-438F-967A-E75E6BAXXXXX",
    "name": "cluster-eks-nprd-pnu0001324-s7",
    "type": "EC2_LINUX",
    "principalArn": "arn:aws:iam::012345678910:role/karpenter-node-role-cluster-eks-nprd-pnu0001324-s7",
    "tags": {
      "services.k8s.aws/controller-version": "eks-1.4.4",
      "services.k8s.aws/namespace": "karpenter"
    }
  },
  "responseElements": {
    "accessEntry": {
      "clusterName": "cluster-eks-nprd-pnu0001324-s7",
      "principalArn": "arn:aws:iam::012345678910:role/karpenter-node-role-cluster-eks-nprd-pnu0001324-s7",
      "kubernetesGroups": [
        "system:nodes"
      ],
      "accessEntryArn": "arn:aws:eks:us-east-1:012345678910:access-entry/cluster-eks-nprd-pnu0001324-s7/role/759021108710/karpenter-node-role-cluster-eks-nprd-pnu0001324-s7/f8c8bdf0-4831-b76a-5b34-b465b3bd4b6d",
      "createdAt": 1724360200.593,
      "modifiedAt": 1724360200.593,
      "tags": {
        "services.k8s.aws/controller-version": "eks-1.4.4",
        "services.k8s.aws/namespace": "karpenter"
      },
      "username": "system:node:{{EC2PrivateDNSName}}",
      "type": "EC2_LINUX"
    }
  },
  "requestID": "78bfea3f-23f1-4367-93ed-51b57942e769",
  "eventID": "e2e0e3e3-83c3-45ca-9d82-85362e548463",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "012345678910",
  "eventCategory": "Management"
}
```
Failed update attempt by ACK:
```json
{
  "eventVersion": "1.09",
  "userIdentity": {
    "type": "AssumedRole",
    "principalId": "AROA3BOJO7XTFT6XXXXXX:eks-cluster-ek-ack-eks-co-804545b6-3848-48b5-a0c9-ed128a497a83",
    "arn": "arn:aws:sts::012345678910:assumed-role/ACKRole-cluster-eks-nprd-pnu0001324-s7/eks-cluster-ek-ack-eks-co-804545b6-3848-48b5-a0c9-ed128a497a83",
    "accountId": "012345678910",
    "accessKeyId": "ASIA3BOJO7XTPZXXXXXX",
    "sessionContext": {
      "sessionIssuer": {
        "type": "Role",
        "principalId": "AROA3BOJO7XTFT6XXXXXX",
        "arn": "arn:aws:iam::012345678910:role/ACKRole-cluster-eks-nprd-pnu0001324-s7",
        "accountId": "012345678910",
        "userName": "ACKRole-cluster-eks-nprd-pnu0001324-s7"
      },
      "attributes": {
        "creationDate": "2024-08-23T08:49:45Z",
        "mfaAuthenticated": "false"
      }
    }
  },
  "eventTime": "2024-08-23T12:32:34Z",
  "eventSource": "eks.amazonaws.com",
  "eventName": "UpdateAccessEntry",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "98.80.131.36",
  "userAgent": "aws-controllers-k8s/eks.services.k8s.aws-1.4.4 (GitCommit/1afb0be525003c884306f73102991dd2557c1e3d; BuildDate/2024-08-07T04:31; CRDKind/AccessEntry; CRDVersion/v1alpha1) aws-sdk-go/1.49.13 (go1.22.5; linux; amd64)",
  "errorCode": "InvalidParameterException",
  "requestParameters": {
    "clientRequestToken": "B35794FF-2AF3-4F93-AB5B-96EB79DXXXXX",
    "name": "cluster-eks-nprd-pnu0001324-s7",
    "principalArn": "arn%3Aaws%3Aiam%3A%3A759021108710%3Arole%2Fkarpenter-node-role-cluster-eks-nprd-pnu0001324-s7"
  },
  "responseElements": {
    "message": "This operation can only be performed on Access Entries with a type of \"STANDARD\"."
  },
  "requestID": "11a808c0-d7ae-474c-ae09-1bc8ac907793",
  "eventID": "1c7df672-d6c4-491e-bdbf-cb26dd32e414",
  "readOnly": false,
  "eventType": "AwsApiCall",
  "managementEvent": true,
  "recipientAccountId": "012345678910",
  "eventCategory": "Management"
}
```
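Per the `InvalidParameterException` in the CloudTrail event above, `UpdateAccessEntry` is only valid for access entries of type `STANDARD`, so a node-type entry like `EC2_LINUX` will always reject it. A minimal sketch (in Python for illustration; the actual ACK controller is written in Go and the names here are hypothetical, not its real code) of the guard the reconciler would need:

```python
# Only STANDARD access entries accept UpdateAccessEntry, per the EKS error
# message "This operation can only be performed on Access Entries with a
# type of \"STANDARD\"."
UPDATABLE_TYPES = {"STANDARD"}

def should_call_update_access_entry(entry_type: str, delta_fields: set) -> bool:
    """Return True only when an UpdateAccessEntry call can succeed.

    entry_type   -- the access entry's type, e.g. "STANDARD" or "EC2_LINUX"
    delta_fields -- spec fields that differ between desired and observed state
    """
    if entry_type not in UPDATABLE_TYPES:
        # Node-type entries (e.g. EC2_LINUX) reject UpdateAccessEntry;
        # skipping the call would let the resource reach SYNCED=True.
        return False
    return bool(delta_fields)
```

With such a guard, reconciling an unchanged `EC2_LINUX` entry would be a no-op after creation instead of an endlessly retried, always-failing update.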
The YAML definition for the AccessEntry hasn't been changed and is as follows:
```yaml
apiVersion: eks.services.k8s.aws/v1alpha1
kind: AccessEntry
metadata:
  name: eks-${clusterNameShort}-${cluster_env}-karpenter-ae
  namespace: karpenter
spec:
  clusterName: ${clusterName}
  type: EC2_LINUX
  principalARN: arn:aws:iam::${cluster_account}:role/karpenter-node-role-${clusterName}
```
It's deployed to the cluster by FluxCD using a Kustomization.
Steps to reproduce
Just deploy the resource in the cluster. It never reaches the `True` SYNCED state:
```
$ k get accessentry -A
NAMESPACE   NAME                               CLUSTER                          TYPE        USERNAME   SYNCED    AGE
karpenter   eks-sprint6-staging-karpenter-ae   cluster-eks-nprd-pnu0001324-s7   EC2_LINUX              Unknown   39m
```
Expected outcome
The resource with a True SYNCED state in the cluster.
Environment
- eks-controller version: 1.4.4
- Kubernetes version: 1.30
- Using EKS (yes/no), if so version? Yes, eks.6
- AWS service targeted (S3, RDS, etc.): EKS (AccessEntry)