K8TRE AWS base infrastructure

Deploy AWS infrastructure using Terraform to support K8TRE.

First-time setup

You must first create an S3 bucket to store the Terraform state file. Activate your AWS credentials in your shell environment, edit the bucket name in resource.aws_s3_bucket.bucket in bootstrap/backend.tf, then:

cd bootstrap
terraform init
terraform apply
cd ..
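
For reference, a minimal sketch of what the bucket resource in bootstrap/backend.tf might look like (the bucket name is a placeholder, and the versioning resource is a common addition rather than a confirmed part of this repo):

resource "aws_s3_bucket" "bucket" {
  # Placeholder name: S3 bucket names are globally unique, so choose your own.
  bucket = "my-k8tre-terraform-state"
}

resource "aws_s3_bucket_versioning" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

Versioning lets you recover earlier state files if an apply goes wrong.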

Deploy Amazon Elastic Kubernetes Service (EKS)

By default this will deploy two EKS clusters:

  • k8tre-dev-argocd is where ArgoCD will run
  • k8tre-dev is where K8TRE will be deployed

IAM roles and pod identities are set up to allow ArgoCD, running in the k8tre-dev-argocd cluster, to have admin access to the k8tre-dev cluster.
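
For illustration, cross-cluster access of this kind can be expressed with an EKS Pod Identity association. The following is a hedged sketch; the service account and the role reference are assumptions, not this repo's actual resource names:

# Lets the ArgoCD application controller in k8tre-dev-argocd assume an IAM
# role that holds admin access entries on the k8tre-dev cluster.
resource "aws_eks_pod_identity_association" "argocd" {
  cluster_name    = "k8tre-dev-argocd"
  namespace       = "argocd"
  service_account = "argocd-application-controller"
  role_arn        = aws_iam_role.k8tre_dev_eks_access.arn # hypothetical role resource
}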

Configuration

Edit main.tf. You must change the bucket in the terraform.backend.s3 block to match the one in bootstrap/backend.tf, and you may want to adjust the configuration of module.k8tre-eks.
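
As a hypothetical example, the backend block in main.tf should end up pointing at the bootstrap bucket (the key and region here are placeholders):

terraform {
  backend "s3" {
    # Must match the bucket created in bootstrap/backend.tf.
    bucket = "my-k8tre-terraform-state"
    key    = "k8tre/terraform.tfstate"
    region = "eu-west-2"
  }
}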

If you want to deploy ArgoCD in the same cluster as K8TRE, delete:

  • module.k8tre-argocd-eks
  • output.kubeconfig_command_k8tre-argocd-dev

Run Terraform

Activate your AWS credentials in your shell environment, then:

terraform init
terraform apply

If there's a timeout, run

terraform apply

again.

Kubernetes access

terraform apply should display the commands to create a kubeconfig entry for each of the k8tre-dev and k8tre-dev-argocd clusters.
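
Such an output is typically an aws eks update-kubeconfig invocation. A sketch of what one of these outputs might look like (the output name, cluster name, and region are assumptions):

output "kubeconfig_command_k8tre-dev" {
  description = "Run this to add the k8tre-dev cluster to your kubeconfig"
  value       = "aws eks update-kubeconfig --name k8tre-dev --region eu-west-2"
}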

Install K8TRE prerequisites and ArgoCD

The apps directory will install some Kubernetes prerequisites for K8TRE and set up ArgoCD. If you prefer, you can set everything up manually by following the K8TRE documentation.

Edit apps/variables.tf:

  • Change the bucket in the terraform.backend.s3 block to match the one in bootstrap/backend.tf.
  • Change the data.terraform_remote_state.k8tre section to match the backend.s3 section in main.tf (see the sketch after this list). This allows the ArgoCD Terraform to automatically look up the EKS details without needing to specify everything manually.
  • By default this will also install the K8TRE ArgoCD root-app-of-apps. Set install_k8tre = false to disable this.
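
A hedged sketch of the remote state lookup after editing; the bucket, key, and region are placeholders and must match the backend.s3 section in main.tf:

data "terraform_remote_state" "k8tre" {
  backend = "s3"
  config = {
    # Must match the backend "s3" block in main.tf.
    bucket = "my-k8tre-terraform-state"
    key    = "k8tre/terraform.tfstate"
    region = "eu-west-2"
  }
}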

Things to note

EKS is deployed in a private subnet, with a NAT gateway to a public subnet. A GitHub OIDC role can optionally be created.

The cluster has a single EKS node group in a single subnet (single availability zone) to reduce costs, and to avoid multi-AZ storage. If you require multi-AZ high-availability you will need to modify this.
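
If module.k8tre-eks wraps the community terraform-aws-modules/eks module (an assumption here, not something this README confirms), the single-AZ node group would look roughly like:

eks_managed_node_groups = {
  default = {
    min_size     = 1
    max_size     = 3
    desired_size = 1
    # A single subnet pins the group, and its EBS volumes, to one AZ.
    subnet_ids = [module.vpc.private_subnets[0]]
  }
}

A multi-AZ setup would list several subnets here and needs storage that tolerates pods moving between zones.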

A prefix list ${var.cluster_name}-service-access-cidrs is provided for convenience. It is not used by any Terraform resource, but can be referenced by application load balancers deployed in EKS.
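
A hedged sketch of how such a prefix list might be declared (the CIDR entry is a placeholder):

resource "aws_ec2_managed_prefix_list" "service_access" {
  name           = "${var.cluster_name}-service-access-cidrs"
  address_family = "IPv4"
  max_entries    = 10

  entry {
    cidr        = "203.0.113.0/24"
    description = "Example office network"
  }
}

Load balancer security group rules can then reference the prefix list ID instead of repeating CIDRs.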

Optional wildcard certificate (not currently automated)

To simplify certificate management in K8TRE you can optionally create a wildcard public certificate using Amazon Certificate Manager. This certificate can then be used in AWS load balancers provisioned by K8TRE without further configuration.
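
This step is not automated in this repository, but a manual Terraform equivalent might look like the following (the domain is a placeholder, and the DNS validation records still have to be created in your DNS zone):

resource "aws_acm_certificate" "wildcard" {
  domain_name       = "*.k8tre.example.org"
  validation_method = "DNS"
}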

Developer notes

To debug ArgoCD inter-cluster auth:

kubectl -n argocd exec -it deploy/argocd-server -- bash

argocd-k8s-auth aws --cluster-name k8tre-dev --role-arn arn:aws:iam::${ACCOUNT_ID}:role/k8tre-dev-eks-access

Linting

When making changes to this repository run:

terraform validate
terraform fmt -recursive
tflint --recursive
npx prettier@3.6.2 --write '**/*.{yaml,yml,md}'