Terraform module for easy management of EKS clusters on AWS.
- Have AWS access
- Apply the Terraform config to create the cluster (usually slow, i.e. 10+ mins.)
- Set your kubeconfig using the AWS CLI:

  ```sh
  aws eks update-kubeconfig --name <cluster-name> # e.g. example-cluster
  ```
- Confirm connection to the cluster:

  ```sh
  kubectl get nodes # should return `no resources`
  ```

  When you create an Amazon EKS cluster, the IAM entity (user or role, for example for federated users) that creates the cluster is automatically granted `system:masters` permissions in the cluster's RBAC configuration. I.e. if your cluster is created by a machine user role (e.g. as part of a CI/CD task), you will need to assume this role to establish the initial connection to the cluster; a sketch of this follows the steps below. More info here.
- Save and apply the `config_map_aws_auth` output from Terraform:

  ```sh
  terraform output config_map_aws_auth # save as auth-config.yml
  kubectl apply -f auth-config.yml
  ```
- Confirm that nodes have joined/are joining the cluster:

  ```sh
  kubectl get nodes # should show a list of nodes
  ```
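If the cluster was created by a machine user role (see the note in the connection step above), a minimal sketch for establishing the initial connection is shown below. The cluster name, account ID and role name are placeholders, not values produced by this module:

```sh
# Placeholder cluster name and role ARN; substitute the role that created the cluster.
# --role-arn embeds the role in the generated kubeconfig, so kubectl
# authenticates to the cluster as that role.
aws eks update-kubeconfig \
  --name example-cluster \
  --role-arn arn:aws:iam::123456789012:role/ci-cluster-creator

# Confirm that the assumed role can reach the cluster.
kubectl get nodes
```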
- Cluster access requires an authenticated AWS shell session in addition to the kubeconfig being present.
- E.g. make sure that:
  - `vaulted` is working
  - the session hasn't timed out
  - the correct AWS role is in use (a quick check is sketched below)
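A simple way to verify the last two points is to ask AWS which identity your shell is currently using; this is a generic AWS CLI check and does not cover vaulted-specific commands:

```sh
# Prints the account ID and ARN of the identity the shell is currently using.
# Fails with an expired-token error if the session has timed out.
aws sts get-caller-identity
```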
Terraform module which creates an EKS cluster on AWS.
Currently maintained by these contributors.
MIT License. See LICENSE for full details.