terraform-aws-eks

A Terraform module to create an Elastic Kubernetes Service (EKS) cluster and associated worker instances on AWS.



A Terraform module to create a managed Kubernetes cluster on AWS EKS. Available through the Terraform registry. Inspired by and adapted from this doc and its source code. Read the AWS docs on EKS to get connected to the Kubernetes dashboard.


Assumptions

  • You want to create an EKS cluster and an autoscaling group of workers for the cluster.
  • You want these resources to exist within security groups that allow communication and coordination. These can be user provided or created within the module.
  • You've created a Virtual Private Cloud (VPC) and subnets where you intend to put the EKS resources.
  • If manage_aws_auth = true, it's required that both kubectl (>=1.10) and aws-iam-authenticator are installed and on your shell's PATH.
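If you already manage security groups yourself, the module can attach the cluster and workers to them instead of creating its own (see cluster_create_security_group and worker_create_security_group in the inputs below). A minimal sketch; the security group IDs are placeholders:

```hcl
module "my-cluster" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-cluster"
  subnets      = ["subnet-abcde012", "subnet-bcde012a"]
  vpc_id       = "vpc-1234556abcdef"

  # Skip module-managed security groups and attach to user-provided ones
  cluster_create_security_group = false
  cluster_security_group_id     = "sg-0123456789abcdef0" # placeholder
  worker_create_security_group  = false
  worker_security_group_id      = "sg-0fedcba9876543210" # placeholder
}
```

When you bring your own groups, you are responsible for the ingress/egress rules that let the control plane and workers communicate.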

Usage example

A full example leveraging other community modules is contained in the examples/eks_test_fixture directory. Here's the gist of using it via the Terraform registry:

module "my-cluster" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-cluster"
  subnets      = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
  vpc_id       = "vpc-1234556abcdef"

  worker_groups = [
    {
      instance_type = "m4.large"
      asg_max_size  = 5
    }
  ]

  tags = {
    environment = "test"
  }
}

Other documentation

Release schedule

Generally the maintainers will try to release the module once every 2 weeks to keep up with PR additions. If particularly pressing changes are merged, or the maintainers find themselves with spare time (hah!), releases may happen more often.

Testing

This module has been packaged with awspec tests through kitchen and kitchen-terraform. To run them:

  1. Install rvm and the ruby version specified in the Gemfile.

  2. Install bundler and the gems from our Gemfile:

     gem install bundler && bundle install

  3. Ensure your AWS environment is configured (i.e. credentials and region) for the tests.

  4. Run bundle exec kitchen test from the root of the repo.

For now, connectivity to the Kubernetes cluster is not tested but will be in the future. Once the test fixture has converged, you can query the test cluster from that terminal session with:

kubectl get nodes --watch --kubeconfig kubeconfig

(using the default settings config_output_path = "./" and write_kubeconfig = true)

Doc generation

Code formatting and documentation for variables and outputs is generated using pre-commit-terraform hooks, which use terraform-docs.

Follow these instructions to install pre-commit locally.

Install terraform-docs with go get github.com/segmentio/terraform-docs or brew install terraform-docs.

Contributing

Report issues/questions/feature requests in the issues section.

Full contributing guidelines are covered here.

IAM Permissions

Testing and using this repo requires a minimum set of IAM permissions. Test permissions are listed in the eks_test_fixture README.

Change log

The changelog captures all important release notes.

Authors

Created and maintained by Brandon O'Connor - brandon@atscale.run. Many thanks to the contributors listed here!

License

MIT Licensed. See LICENSE for full details.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| cluster_create_security_group | Whether to create a security group for the cluster or attach the cluster to cluster_security_group_id. | string | true | no |
| cluster_create_timeout | Timeout value when creating the EKS cluster. | string | 15m | no |
| cluster_delete_timeout | Timeout value when deleting the EKS cluster. | string | 15m | no |
| cluster_name | Name of the EKS cluster. Also used as a prefix in names of related resources. | string | - | yes |
| cluster_security_group_id | If provided, the EKS cluster will be attached to this security group. If not given, a security group will be created with the necessary ingress/egress to work with the workers and provide API access to your current IP/32. | string | `` | no |
| cluster_version | Kubernetes version to use for the EKS cluster. | string | 1.11 | no |
| config_output_path | Where to save the Kubectl config file (if write_kubeconfig = true). Should end in a forward slash /. | string | ./ | no |
| kubeconfig_aws_authenticator_additional_args | Any additional arguments to pass to the authenticator, such as the role to assume, e.g. ["-r", "MyEksRole"]. | list | [] | no |
| kubeconfig_aws_authenticator_command | Command to use to fetch AWS EKS credentials. | string | aws-iam-authenticator | no |
| kubeconfig_aws_authenticator_command_args | Default arguments passed to the authenticator command. Defaults to [token -i $cluster_name]. | list | [] | no |
| kubeconfig_aws_authenticator_env_variables | Environment variables that should be used when executing the authenticator, e.g. { AWS_PROFILE = "eks" }. | map | {} | no |
| kubeconfig_name | Override the default name used for items kubeconfig. | string | `` | no |
| local_exec_interpreter | Command to run for local-exec resources. Must be a shell-style interpreter. If you are on Windows, Git Bash is a good choice. | list | [ "/bin/sh", "-c" ] | no |
| manage_aws_auth | Whether to write and apply the aws-auth configmap file. | string | true | no |
| map_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | [] | no |
| map_accounts_count | The count of accounts in the map_accounts list. | string | 0 | no |
| map_roles | Additional IAM roles to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | [] | no |
| map_roles_count | The count of roles in the map_roles list. | string | 0 | no |
| map_users | Additional IAM users to add to the aws-auth configmap. See examples/eks_test_fixture/variables.tf for example format. | list | [] | no |
| map_users_count | The count of users in the map_users list. | string | 0 | no |
| subnets | A list of subnets to place the EKS cluster and workers within. | list | - | yes |
| tags | A map of tags to add to all resources. | map | {} | no |
| vpc_id | VPC where the cluster and workers will be deployed. | string | - | yes |
| worker_additional_security_group_ids | A list of additional security group ids to attach to worker instances. | list | [] | no |
| worker_create_security_group | Whether to create a security group for the workers or attach the workers to worker_security_group_id. | string | true | no |
| worker_group_count | The number of maps contained within the worker_groups list. | string | 1 | no |
| worker_groups | A list of maps defining worker group configurations. See workers_group_defaults for valid keys. | list | [ { "name": "default" } ] | no |
| worker_security_group_id | If provided, all workers will be attached to this security group. If not given, a security group will be created with the necessary ingress/egress to work with the EKS cluster. | string | `` | no |
| worker_sg_ingress_from_port | Minimum port number from which pods will accept communication. Must be changed to a lower value if some pods in your cluster will expose a port lower than 1025 (e.g. 22, 80, or 443). | string | 1025 | no |
| workers_group_defaults | Override default values for target groups. See workers_group_defaults_defaults in locals.tf for valid keys. | map | {} | no |
| write_kubeconfig | Whether to write a Kubectl config file containing the cluster configuration. Saved to config_output_path. | string | true | no |
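As the table notes, each map_* list is paired with an explicit *_count input, since the module cannot derive the count from a potentially computed list. A hedged sketch of adding an IAM role to the aws-auth configmap; the role ARN and username are placeholders, and the map keys follow the format shown in examples/eks_test_fixture/variables.tf:

```hcl
module "my-cluster" {
  # ... other required arguments as in the usage example above ...

  map_roles = [
    {
      role_arn = "arn:aws:iam::123456789012:role/ops" # placeholder ARN
      username = "ops"
      group    = "system:masters"
    }
  ]
  map_roles_count = 1
}
```

map_users and map_accounts follow the same pattern with map_users_count and map_accounts_count.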

Outputs

| Name | Description |
|------|-------------|
| cluster_certificate_authority_data | Nested attribute containing certificate-authority-data for your cluster. This is the base64 encoded certificate data required to communicate with your cluster. |
| cluster_endpoint | The endpoint for your EKS Kubernetes API. |
| cluster_id | The name/id of the EKS cluster. |
| cluster_security_group_id | Security group ID attached to the EKS cluster. |
| cluster_version | The Kubernetes server version for the EKS cluster. |
| config_map_aws_auth | A kubernetes configuration to authenticate to this EKS cluster. |
| kubeconfig | kubectl config file contents for this EKS cluster. |
| worker_iam_role_arn | Default IAM role ARN for EKS worker groups. |
| worker_iam_role_name | Default IAM role name for EKS worker groups. |
| worker_security_group_id | Security group ID attached to the EKS workers. |
| workers_asg_arns | ARNs of the autoscaling groups containing workers. |
| workers_asg_names | Names of the autoscaling groups containing workers. |
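These outputs are referenced like any other module attribute. For example, to surface the cluster endpoint and kubeconfig from the my-cluster instance shown in the usage example (a sketch in the calling configuration, not part of the module itself):

```hcl
output "cluster_endpoint" {
  value = "${module.my-cluster.cluster_endpoint}"
}

output "kubeconfig" {
  value = "${module.my-cluster.kubeconfig}"
}
```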