terraform-aws-eks-jx v1.19.0 produces a cluster where nodes are not accessible
tgelpi-bot opened this issue
Building an EKS environment using the latest v1.19.0 produces a cluster that fails to list nodes when running the command 'kubectl get nodes'. I was able to produce an accessible environment when I reverted back to the previous version, v1.18.11.
The major change in v1.19.0 looks like the Terraform AWS Provider Version 4 upgrade. Using a forked version of v1.19.0, I tried adjusting the version constraints to allow a range covering both version 3 and version 4, but was unsuccessful. After reviewing the Terraform AWS Provider Version 4 Upgrade Guide, I'm still not sure which provider version needs to be set.
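For reference, this is roughly the kind of constraint I was experimenting with in the fork (a sketch only; the exact file layout and version bounds in my fork may differ):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # attempt to allow both the v3 and v4 provider lines
      version = ">= 3.64, < 5.0"
    }
  }
}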
I am trying to determine what is missing to grant access to the nodes. Comparing the output of the command aws eks describe-cluster --name myclust for both the v1.18.11 and v1.19.0 clusters produced similar results, in particular for public and private endpoint access:
{
    "cluster": {
        "name": "myclust",
        "arn": "arn:aws:eks:us-west-2:xxx:cluster/myclust",
        "createdAt": "2022-08-10T13:09:25.125000-04:00",
        "version": "1.21",
        "endpoint": "https://87F6F8AF22DBC5B3D94C632A16DED69F.gr7.us-west-2.eks.amazonaws.com",
        "roleArn": "arn:aws:iam::xxx:role/myclust20220810170858576400000005",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-00404a276c7ed82ad",
                "subnet-0d6f77a97996fb9b2",
                "subnet-080293f8c1138041b"
            ],
            "securityGroupIds": [
                "sg-06bc0c4092e7b74c3"
            ],
            "clusterSecurityGroupId": "sg-074c556b990641bcf",
            "vpcId": "vpc-002110ee6bf70ef9b",
            "endpointPublicAccess": true,
            "endpointPrivateAccess": false,
            "publicAccessCidrs": [
                "0.0.0.0/0"
            ]
        },
        "kubernetesNetworkConfig": {
            "serviceIpv4Cidr": "172.20.0.0/16",
            "ipFamily": "ipv4"
        },
        "logging": {
            "clusterLogging": [
                {
                    "types": [
                        "api",
                        "audit",
                        "authenticator",
                        "controllerManager",
                        "scheduler"
                    ],
                    "enabled": false
                }
            ]
        },
        "identity": {
            "oidc": {
                "issuer": "https://oidc.eks.us-west-2.amazonaws.com/id/87F6F8AF22DBC5B3D94C632A16DED69F"
            }
        },
        "status": "ACTIVE",
        "certificateAuthority": {
            "data": "xxx"
        },
        "platformVersion": "eks.10",
        "tags": {}
    }
}
Another note: the following variables were set for both versions:
jx_git_url = "https://github.com/jx3rocks/jx3-eks-vault.src.git"
force_destroy = true
cluster_name = "myclust"
profile = "myprofile"
subdomain = "xxx"
nginx_chart_version = "3.12.0"
install_kuberhealthy = false
use_vault = true
region = "us-west-2"
cluster_version = "1.21"
node_machine_type = "m5.large"
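For context, these values are fed into a root configuration that wraps terraform-aws-eks-jx roughly like this (a sketch of my setup; the registry source string and the split between provider and module inputs are my assumptions, not the module's documented usage):

provider "aws" {
  region  = var.region
  profile = var.profile
}

module "eks-jx" {
  source  = "jenkins-x/eks-jx/aws"
  version = "1.19.0" # swapping this pin back to 1.18.11 gives an accessible cluster

  cluster_name         = var.cluster_name
  cluster_version      = var.cluster_version
  node_machine_type    = var.node_machine_type
  jx_git_url           = var.jx_git_url
  subdomain            = var.subdomain
  nginx_chart_version  = var.nginx_chart_version
  use_vault            = var.use_vault
  install_kuberhealthy = var.install_kuberhealthy
  force_destroy        = var.force_destroy
}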