Implementation of an EKS setup using Terraform and CloudFormation. Fully functional templates to deploy your VPC and Kubernetes clusters together with all the essential tags and add-ons. Worker nodes are part of an AutoScalingGroup consisting of spot and on-demand instances, as sketched below.
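For illustration, a minimal Terraform sketch of such a mixed spot/on-demand AutoScalingGroup; resource names, sizes and instance types are illustrative assumptions, not the repo's actual template:

```hcl
# Illustrative worker AutoScalingGroup mixing on-demand and spot capacity.
resource "aws_autoscaling_group" "eks_workers" {
  name                = "eks-workers"
  min_size            = 1
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 1 # keep one on-demand node
      on_demand_percentage_above_base_capacity = 0 # everything above base is spot
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.eks_workers.id
        version            = "$Latest"
      }

      # Multiple instance types improve the chance of getting spot capacity.
      override {
        instance_type = "m5.large"
      }
      override {
        instance_type = "m5a.large"
      }
    }
  }
}
```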
Templates support deployment to different AWS partitions. I have tested them with the public and China partitions. I am actively using this configuration to run EKS setups in Ireland (eu-west-1), North Virginia (us-east-1) and Beijing (cn-north-1).
The latest configuration templates I use can be found in terraform-aws for the aws provider and terraform-k8s for the kubernetes provider. Once you configure your environment variables in ./terraform-aws/vars and ./terraform-k8s/vars, you can use the Makefile commands to run your deployments.
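As an illustration, a vars file could look like the following; the actual variable names are defined in the repo's vars directories, so the names below are assumptions:

```hcl
# Example terraform.tfvars-style values; variable names are illustrative.
cluster_name        = "eks-test"
region              = "eu-west-1"
vpc_cidr            = "10.0.0.0/16"
spot_instance_types = ["m5.large", "m5a.large"]
```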
After applying the templates, you will find the latest setup of the following components:
- VPC with public/private subnets, flow logs enabled and VPC endpoints for ECR and S3
- EKS control plane
- EKS worker nodes in private subnets (spot and on-demand instances based on variables)
- Option to use Managed Node Groups
- Dynamic bastion host
- Automatically configured aws-auth ConfigMap so worker nodes can join the cluster
- OpenID Connect provider which can be used to assign IAM roles to service accounts in k8s
- NodeDrainer lambda which will drain worker nodes during a rolling update of the nodes (this is only applicable to spot worker nodes; managed node groups do not require this lambda). The node drainer lambda is maintained at https://github.com/marcincuber/tf-k8s-node-drainer
- IAM roles for service accounts such as aws-node, cluster-autoscaler, alb-ingress-controller and external-secrets (role ARNs are used when you deploy Kubernetes add-ons with service accounts that make use of the OIDC provider; see the IRSA sketch after this list)
- For spot termination handling, use aws-node-termination-handler from k8s_templates/aws-node-termination-handler.
- EKS cluster add-ons (CoreDNS + kube-proxy)
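As an example of the OIDC/IRSA wiring above, here is a minimal Terraform sketch of a role that the cluster-autoscaler service account could assume. It assumes an aws_eks_cluster.main resource and the hashicorp/tls provider; all names and namespaces are illustrative, not the repo's exact code:

```hcl
# Fetch the cluster OIDC issuer certificate thumbprint.
data "tls_certificate" "oidc" {
  url = aws_eks_cluster.main.identity[0].oidc[0].issuer
}

# Register the cluster's OIDC issuer as an IAM identity provider.
resource "aws_iam_openid_connect_provider" "eks" {
  url             = aws_eks_cluster.main.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}

# Trust policy restricting the role to one specific service account.
data "aws_iam_policy_document" "cluster_autoscaler_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:cluster-autoscaler"]
    }
  }
}

resource "aws_iam_role" "cluster_autoscaler" {
  name               = "eks-cluster-autoscaler"
  assume_role_policy = data.aws_iam_policy_document.cluster_autoscaler_assume.json
}
```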
All the templates for additional deployments/daemonsets can be found in k8s_templates.
To apply the templates, simply run kubectl apply -f . from the desired folder. Ensure that you put the correct role ARN in the service account configuration (an illustrative example follows) and check that the environment variables are correct.
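For example, expressed via the kubernetes provider (the repo's k8s_templates are plain manifests, but the same service account annotation applies; the account ID and role name below are placeholders):

```hcl
# Service account annotated with an IRSA role ARN (placeholder account ID).
resource "kubernetes_service_account" "cluster_autoscaler" {
  metadata {
    name      = "cluster-autoscaler"
    namespace = "kube-system"

    annotations = {
      "eks.amazonaws.com/role-arn" = "arn:aws:iam::111111111111:role/eks-cluster-autoscaler"
    }
  }
}
```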
You will find templates for the following Kubernetes components:
- ALB ingress controller
- AWS Load Balancer controller
- AWS node termination handler
- Calico
- Cert Manager
- Cluster Autoscaler
- CoreDNS (also available as a managed EKS add-on; see the sketch after this list)
- Dashboard
- External-DNS
- External Secrets
- Kube Proxy
- Kube2iam
- Metrics server
- NewRelic
- Reloader
- Spot Interrupt Handler
- VPC CNI Plugin
- Secrets CSI Driver
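CoreDNS and kube-proxy from the list above can also be managed as native EKS add-ons directly from Terraform. A minimal sketch, assuming an aws_eks_cluster.main resource; pin addon_version in practice to match your cluster version:

```hcl
# Manage CoreDNS and kube-proxy as EKS add-ons (default versions).
resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"
}

resource "aws_eks_addon" "kube_proxy" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "kube-proxy"
}
```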
Check out my stories on Medium if you are interested in finding out more about specific topics.
Amazon EKS upgrade journey from 1.20 to 1.21
Amazon EKS upgrade journey from 1.19 to 1.20
Amazon EKS upgrade journey from 1.18 to 1.19
Amazon EKS upgrade journey from 1.17 to 1.18
Amazon EKS upgrade journey from 1.16 to 1.17
Amazon EKS upgrade journey from 1.15 to 1.16
Kube-bench implementation with EKS
More about my configuration can be found in the blog post I have written recently -> EKS design
Amazon EKS - RBAC with IAM access
Using OIDC provider to allow service accounts to assume IAM role
More about kube2iam configuration can be found in the blog post I have written recently -> EKS and kube2iam
Amazon EKS, setup external DNS with OIDC provider and kube2iam
Amazon EKS + managed node groups
Terraform module written by me can be found in -> https://registry.terraform.io/modules/umotif-public/eks-node-group
Kubernetes GitLab Runners on Amazon EKS
EKS platforms information
Worker nodes upgrades
A user who has been granted access to EKS can configure their .kube/config file using the following commands:
$ aws eks list-clusters
$ aws eks update-kubeconfig --name ${cluster_name}