Deploy AWS infrastructure using Terraform to support K8TRE.
You must first create an S3 bucket to store the Terraform state file.
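For reference, the bucket resource in `bootstrap/backend.tf` looks roughly like this (a minimal sketch; the bucket name is a placeholder and the real file may set additional options):

```hcl
# Sketch of bootstrap/backend.tf: replace the bucket name with your own
# globally unique name.
resource "aws_s3_bucket" "bucket" {
  bucket = "my-k8tre-terraform-state" # placeholder
}
```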
Activate your AWS credentials in your shell environment, edit the `resource.aws_s3_bucket.bucket` bucket name in `bootstrap/backend.tf`, then:

```sh
cd bootstrap
terraform init
terraform apply
cd ..
```

By default this will deploy two EKS clusters:

- `k8tre-dev-argocd` is where ArgoCD will run
- `k8tre-dev` is where K8TRE will be deployed
IAM roles and pod identities are set up to allow ArgoCD, running in the `k8tre-dev-argocd` cluster, to have admin access to the `k8tre-dev` cluster.
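A minimal sketch of how such a pod identity association can be expressed in Terraform; the namespace, service account, and role ARN are illustrative assumptions, not necessarily what this repo uses:

```hcl
# Sketch: bind an IAM role to ArgoCD's service account via EKS Pod Identity.
resource "aws_eks_pod_identity_association" "argocd" {
  cluster_name    = "k8tre-dev-argocd"
  namespace       = "argocd"                        # assumption
  service_account = "argocd-application-controller" # assumption
  role_arn        = "arn:aws:iam::123456789012:role/k8tre-dev-eks-access" # placeholder account ID
}
```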
Edit `main.tf`. You must modify the `terraform.backend.s3` `bucket` to match the one in `bootstrap/backend.tf`, and you may want to modify the configuration of `module.k8tre-eks`.
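The backend block to edit looks something like this (a sketch: only the bucket must match `bootstrap/backend.tf`; the key and region shown are assumptions):

```hcl
terraform {
  backend "s3" {
    bucket = "my-k8tre-terraform-state" # must match bootstrap/backend.tf
    key    = "k8tre/terraform.tfstate"  # assumption: the repo's actual key may differ
    region = "eu-west-2"                # assumption
  }
}
```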
If you want to deploy ArgoCD in the same cluster as K8TRE, delete `module.k8tre-argocd-eks` and `output.kubeconfig_command_k8tre-argocd-dev`.
Activate your AWS credentials in your shell environment, then:

```sh
terraform init
terraform apply
```

If there's a timeout, run `terraform apply` again.
`terraform apply` should display the command to create a kubeconfig file for the `k8tre-dev` and `k8tre-dev-argocd` clusters.
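Those commands come from Terraform outputs, which presumably wrap `aws eks update-kubeconfig`. A sketch of the shape (the output name is inferred from `output.kubeconfig_command_k8tre-argocd-dev` above; the exact value is an assumption):

```hcl
# Sketch: the real output may include extra flags such as --region or --alias.
output "kubeconfig_command_k8tre-dev" {
  value = "aws eks update-kubeconfig --name k8tre-dev"
}
```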
The `apps` directory installs some Kubernetes prerequisites for K8TRE and sets up ArgoCD.
If you prefer, you can set everything up manually by following the K8TRE documentation.
Edit `apps/variables.tf`:

- Modify `terraform.backend.s3` `bucket` to match the one in `bootstrap/backend.tf`.
- Change the `data.terraform_remote_state.k8tre` section to match the `backend.s3` section in `main.tf` (see the sketch after this list). This allows the ArgoCD Terraform to automatically look up the EKS details without needing to specify everything manually.
- By default this will also install the K8TRE ArgoCD root-app-of-apps. Set `install_k8tre = false` to disable this.
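For illustration, the remote state lookup mirrors the backend configuration; a sketch, where the bucket, key, and region are placeholders that must match your `main.tf`:

```hcl
data "terraform_remote_state" "k8tre" {
  backend = "s3"
  config = {
    bucket = "my-k8tre-terraform-state" # must match bootstrap/backend.tf
    key    = "k8tre/terraform.tfstate"  # must match the backend.s3 key in main.tf
    region = "eu-west-2"                # assumption
  }
}
```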
EKS is deployed in a private subnet, with a NAT gateway to a public subnet. A GitHub OIDC role can optionally be created.
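A GitHub OIDC role typically involves an OIDC provider plus a role trusting it; a minimal sketch using standard AWS provider resources (the repository filter and role name are placeholders, and this repo's actual implementation may differ):

```hcl
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # GitHub's published thumbprint
}

data "aws_iam_policy_document" "github_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:my-org/my-repo:*"] # placeholder: restrict to your repository
    }
  }
}

resource "aws_iam_role" "github" {
  name               = "k8tre-github-oidc" # placeholder name
  assume_role_policy = data.aws_iam_policy_document.github_trust.json
}
```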
The cluster has a single EKS node group in a single subnet (a single availability zone) to reduce costs and to avoid multi-AZ storage. If you require multi-AZ high availability you will need to modify this.
A prefix list `${var.cluster_name}-service-access-cidrs` is provided for convenience. It is not used by any Terraform resource, but can be referenced by Application Load Balancers deployed in EKS.
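For example, other Terraform code could allow traffic from the prefix list on a load balancer security group; a sketch, where the security group ID is a placeholder:

```hcl
data "aws_ec2_managed_prefix_list" "service_access" {
  name = "k8tre-dev-service-access-cidrs" # i.e. ${var.cluster_name}-service-access-cidrs
}

# Allow inbound HTTPS from the prefix list's CIDRs.
resource "aws_vpc_security_group_ingress_rule" "service_access" {
  security_group_id = "sg-0123456789abcdef0" # placeholder: your load balancer's security group
  prefix_list_id    = data.aws_ec2_managed_prefix_list.service_access.id
  ip_protocol       = "tcp"
  from_port         = 443
  to_port           = 443
}
```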
To simplify certificate management in K8TRE you can optionally create a wildcard public certificate using AWS Certificate Manager (ACM). This certificate can then be used in AWS load balancers provisioned by K8TRE without further configuration.
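A sketch of such a wildcard certificate, assuming DNS validation (the domain is a placeholder):

```hcl
resource "aws_acm_certificate" "wildcard" {
  domain_name       = "*.k8tre.example.org" # placeholder domain
  validation_method = "DNS"
}
```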
To debug ArgoCD inter-cluster auth:

```sh
kubectl -nargocd exec -it deploy/argocd-server -- bash
# then, inside the pod:
argocd-k8s-auth aws --cluster-name k8tre-dev --role-arn arn:aws:iam::${ACCOUNT_ID}:role/k8tre-dev-eks-access
```

When making changes to this repository run:
```sh
terraform validate
terraform fmt -recursive
tflint --recursive
npx prettier@3.6.2 --write '**/*.{yaml,yml,md}'
```