Kubernetes Terraform Module for Yandex.Cloud

Features

  • Create a Kubernetes cluster of either type: zonal or regional
  • Create user-defined Kubernetes node groups
  • Create service accounts and a KMS encryption key for the Kubernetes cluster
  • Easy to use in other resources via outputs

Kubernetes cluster definition

First, you need to create a VPC network with three subnets.

The Kubernetes module requires the following input variables:

  • VPC network ID
  • VPC network subnet IDs
  • Master locations: List of maps with zone names and subnet IDs for each location.
  • Node groups: List of node group maps with any number of parameters

The master locations list may contain either one or three locations: one for a zonal cluster, three for a regional cluster.
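For a zonal cluster, the list therefore contains a single entry; a minimal sketch (the subnet ID is illustrative):

```hcl
master_locations = [
  {
    zone      = "ru-central1-a"
    subnet_id = "e9b3k97pr2nh1i80as04" # your subnet in that zone
  }
]
```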

Notes:

  • If the node group version is missing, the cluster version is used instead.
  • Each node group can define its own locations; these take precedence over the master locations.
  • If a node group with an auto scale policy does not define its own locations, a location is assigned automatically from the master location list.
  • If the node group list has more than three groups, locations are assigned starting from the beginning of the master location list, so all node groups are distributed across the range of master locations.
  • All three master locations are used for fixed scale node groups.
  • When enabling OS Login for node groups, the nodes must have external IP addresses (var.node_groups_defaults.nat must be true).
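The OS Login note above can be sketched as follows (a minimal fragment; it assumes the module merges these values with its built-in node group defaults, and the key names follow the `enable_oslogin_or_ssh_keys` input described below):

```hcl
node_groups_defaults = {
  nat = true # nodes get external IP addresses, required for OS Login
}

enable_oslogin_or_ssh_keys = {
  enable-oslogin = "true"
  ssh-keys       = null
}
```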

Node Group definition

The node_groups section defines a map of node groups. You can set any parameter for each node group, and all parameters have default values, so an empty node group object is created with those defaults. For instance, in example 2, we define seven node groups with their own parameters. You can create any number of node groups, limited only by the Yandex Kubernetes service capacity. If the node_locations parameter is not provided, locations are assigned automatically from the master location list.

node_groups = {
  "yc-k8s-ng-01" = {
    description  = "Kubernetes nodes group 01"
    fixed_scale  = {
      size       = 2
    }
  },
  "yc-k8s-ng-02" = {
    description   = "Kubernetes nodes group 02"
    auto_scale    = {
      min         = 3
      max         = 5
      initial     = 3
    }
  }
}

Example Usage

module "kube" {
  source     = "./modules/kubernetes"
  network_id = "enpmff6ah2bvi0k10j66"

  master_locations   = [
    {
      zone      = "ru-central1-a"
      subnet_id = "e9b3k97pr2nh1i80as04"
    },
    {
      zone      = "ru-central1-b"
      subnet_id = "e2laaglsc7u99ur8c4j1"
    },
    {
      zone      = "ru-central1-c"
      subnet_id = "b0ckjm3olbpmk2t6c28o"
    }
  ]

  master_maintenance_windows = [
    {
      day        = "monday"
      start_time = "23:00"
      duration   = "3h"
    }
  ]

  node_groups = {
    "yc-k8s-ng-01" = {
      description  = "Kubernetes nodes group 01"
      fixed_scale   = {
        size = 3
      }
      node_labels   = {
        role        = "worker-01"
        environment = "testing"
      }
    },
    "yc-k8s-ng-02"  = {
      description   = "Kubernetes nodes group 02"
      auto_scale    = {
        min         = 2
        max         = 4
        initial     = 2
      }
      node_locations   = [
        {
          zone      = "ru-central1-b"
          subnet_id = "e2lu07tr481h35012c8p"
        }
      ]
      node_labels   = {
        role        = "worker-02"
        environment = "dev"
      }
      max_expansion   = 1
      max_unavailable = 1
    }
  }
}
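The module's outputs (listed in the Outputs section below) can be referenced from other resources. A short sketch, assuming the module label `kube` from the example above:

```hcl
# Expose the cluster ID and public API endpoint from the root module.
output "kube_cluster_id" {
  value = module.kube.cluster_id
}

output "kube_api_endpoint" {
  value = module.kube.external_v4_endpoint
}
```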

Configure Terraform for Yandex Cloud

  • Install YC CLI
  • Add environment variables for Terraform authentication in Yandex Cloud:
export YC_TOKEN=$(yc iam create-token)
export YC_CLOUD_ID=$(yc config get cloud-id)
export YC_FOLDER_ID=$(yc config get folder-id)
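With these variables exported, the yandex provider picks up credentials automatically. A minimal root-module configuration matching the version constraints listed below might look like this (a sketch, not part of the module itself):

```hcl
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    yandex = {
      source  = "yandex-cloud/yandex"
      version = ">= 0.108"
    }
  }
}

# Credentials are taken from YC_TOKEN, YC_CLOUD_ID, and YC_FOLDER_ID.
provider "yandex" {}
```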

Requirements

Name Version
terraform >= 1.0.0
random > 3.3
time > 0.9
yandex >= 0.108

Providers

Name Version
random 3.6.2
time 0.12.0
yandex 0.128.0

Modules

No modules.

Resources

Name Type
random_string.unique_id resource
time_sleep.wait_for_iam resource
yandex_iam_service_account.master resource
yandex_iam_service_account.node_account resource
yandex_kms_symmetric_key.kms_key resource
yandex_kms_symmetric_key_iam_binding.encrypter_decrypter resource
yandex_kms_symmetric_key_iam_binding.encrypter_decrypter_existing_sa resource
yandex_kubernetes_cluster.kube_cluster resource
yandex_kubernetes_node_group.kube_node_groups resource
yandex_resourcemanager_folder_iam_member.node_account resource
yandex_resourcemanager_folder_iam_member.sa_calico_network_policy_role resource
yandex_resourcemanager_folder_iam_member.sa_cilium_network_policy_role resource
yandex_resourcemanager_folder_iam_member.sa_logging_writer_role resource
yandex_resourcemanager_folder_iam_member.sa_node_group_loadbalancer_role_admin resource
yandex_resourcemanager_folder_iam_member.sa_node_group_public_role_admin resource
yandex_resourcemanager_folder_iam_member.sa_public_loadbalancers_role resource
yandex_vpc_security_group.k8s_main_sg resource
yandex_vpc_security_group.k8s_master_whitelist_sg resource
yandex_vpc_security_group.k8s_nodes resource
yandex_vpc_security_group_rule.egress_rules resource
yandex_vpc_security_group_rule.ingress_rules resource
yandex_vpc_security_group_rule.k8s_node_ports resource
yandex_vpc_security_group_rule.k8s_node_ssh_access_rule resource
yandex_vpc_security_group_rule.k8s_outgoing_traffic resource
yandex_client_config.client data source

Inputs

Name Description Type Default Required
allow_public_load_balancers Flag for creating new IAM role with a load-balancer.admin access. bool true no
allowed_ips List of allowed IPv4 CIDR blocks. list(string)
[
"0.0.0.0/0"
]
no
allowed_ips_ssh List of allowed IPv4 CIDR blocks for access via SSH. list(string)
[
"0.0.0.0/0"
]
no
cluster_ipv4_range CIDR block. IP range for allocating pod addresses.
It should not overlap with any subnet in the network
the Kubernetes cluster is located in. Static routes will
be set up for this CIDR block in node subnets.
string "172.17.0.0/16" no
cluster_ipv6_range IPv6 CIDR block. IP range for allocating pod addresses. string null no
cluster_name Name of a specific Kubernetes cluster. string "k8s-cluster" no
cluster_version Kubernetes cluster version string null no
container_runtime_type Kubernetes Node Group container runtime type string "containerd" no
create_kms Flag for enabling or disabling KMS key creation. bool true no
custom_egress_rules Map definition of custom security egress rules.

Example:
custom_egress_rules = {
"rule1" = {
protocol = "ANY"
description = "rule-1"
v4_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24"]
from_port = 8090
to_port = 8099
},
"rule2" = {
protocol = "UDP"
description = "rule-2"
v4_cidr_blocks = ["10.0.1.0/24"]
from_port = 8090
to_port = 8099
}
}
any {} no
custom_ingress_rules Map definition of custom security ingress rules.

Example:
custom_ingress_rules = {
"rule1" = {
protocol = "TCP"
description = "rule-1"
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 3000
to_port = 32767
},
"rule2" = {
protocol = "TCP"
description = "rule-2"
v4_cidr_blocks = ["0.0.0.0/0"]
port = 443
},
"rule3" = {
protocol = "TCP"
description = "rule-3"
predefined_target = "self_security_group"
from_port = 0
to_port = 65535
}
}
any {} no
custom_metadata Adds custom metadata to node groups.
Example:
custom_metadata = {
foo = "bar"
}
map(any) {} no
description Description of the Kubernetes cluster. string "Yandex Managed K8S cluster" no
enable_cilium_policy Flag for enabling or disabling Cilium CNI. bool false no
enable_default_rules Manages creation of default security rules.

Default security rules:
- Allows all incoming traffic of any protocol.
- Allows master-to-node and node-to-node communication inside a security group.
- Allows pod-to-pod and service-to-service communication.
- Allows debugging ICMP packets from internal subnets.
- Allows access to the Kubernetes API via port 6443 from the subnet.
- Allows access to the Kubernetes API via port 443 from the subnet.
bool true no
enable_node_ports_rules Enables creation of NodePort port range rule.

"rule-1" = {
protocol = "TCP"
description = "Rule allows incoming traffic from the Internet to the NodePort port range. Add ports or change existing ones to the required ports."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 30000
to_port = 32767
}
bool true no
enable_node_ssh_access Enables creation of the node SSH access rule.

ingress {
protocol = "TCP"
description = "Allow access to worker nodes via SSH from IP's."
v4_cidr_blocks = var.allowed_ips_ssh
port = 22
}
bool true no
enable_oslogin_or_ssh_keys Enables OS Login or adds SSH keys to node group metadata. map(any)
{
"enable-oslogin": "false",
"ssh-keys": null
}
no
enable_outgoing_traffic Enables all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, and so on.

"rule-1" = {
protocol = "ANY"
description = "Rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Yandex Object Storage, Docker Hub, and so on."
v4_cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 65535
}
bool true no
folder_id The ID of the folder that the Kubernetes cluster belongs to. string null no
kms_key KMS symmetric key parameters. any {} no
master_auto_upgrade Boolean flag that specifies if master can be upgraded automatically. bool true no
master_labels Set of key/value label pairs to assign Kubernetes master nodes. map(string) {} no
master_locations List of locations where the cluster will be created. If the list contains only one
location, a zonal cluster will be created; if there are three locations, this will create a regional cluster.

Note: The master locations list may only have ONE or THREE locations.
list(object({
zone = string
subnet_id = string
}))
n/a yes
master_logging (Optional) Master logging options.
object({
enabled = optional(bool, true)
folder_id = optional(string, null)
enabled_kube_apiserver = optional(bool, true)
enabled_autoscaler = optional(bool, true)
enabled_events = optional(bool, true)
enabled_audit = optional(bool, true)
log_group_id = optional(string, null)
})
{} no
master_maintenance_windows List of structures that specifies maintenance windows,
when auto update for the master is allowed.

Example:
master_maintenance_windows = [
{
day = "monday"
start_time = "23:00"
duration = "3h"
}
]
list(map(string)) [] no
master_service_account_id Existing service account ID for control plane. string null no
network_acceleration_type Network acceleration type for the Kubernetes node group string "standard" no
network_id The ID of the cluster network. string n/a yes
network_policy_provider Network policy provider for Kubernetes cluster string "CALICO" no
node_account_name IAM node account name. string "k8s-node-account" no
node_groups Kubernetes node groups map of maps. It can contain any parameter of the yandex_kubernetes_node_group resource;
many of them may be null and take default values.

Notes:
- If a node group version isn't defined, the cluster version is used instead.
- The master locations list must have exactly one location for a zonal cluster and three for a regional one.
- Each node group can define its own locations; these take precedence over the master locations.
- If a node group with an auto scale policy doesn't define its own locations, locations are generated automatically from the master locations. If the node group list has more than three groups, locations are assigned starting from the beginning of the master locations list, so all node groups are distributed across the range of master locations.
- Master locations are used for fixed scale node groups.
- Auto repair and auto upgrade default to the master_auto_upgrade value.
- Master maintenance windows are also applied to node groups.
- Only one of max_expansion or max_unavailable should be specified for the deployment policy.

Documentation - https://registry.terraform.io/providers/yandex-cloud/yandex/latest/docs/resources/kubernetes_node_group

Default values:
platform_id     = "standard-v3"
node_cores = 4
node_memory = 8
node_gpus = 0
core_fraction = 100
disk_type = "network-ssd"
disk_size = 64
preemptible = false
nat = false
auto_repair = true
auto_upgrade = true
maintenance_day = "monday"
maintenance_start_time = "20:00"
maintenance_duration = "3h30m"
network_acceleration_type = "standard"
container_runtime_type = "containerd"
Example:
node_groups = {
"yc-k8s-ng-01" = {
cluster_name = "k8s-kube-cluster"
description = "Kubernetes nodes group with fixed scale policy and one maintenance window"
fixed_scale = {
size = 3
}
labels = {
owner = "yandex"
service = "kubernetes"
}
node_labels = {
role = "worker-01"
environment = "dev"
}
},
"yc-k8s-ng-02" = {
description = "Kubernetes nodes group with auto scale policy"
auto_scale = {
min = 2
max = 4
initial = 2
}
node_locations = [
{
zone = "ru-central1-b"
subnet_id = "e2lu07tr481h35012c8p"
}
]
labels = {
owner = "example"
service = "kubernetes"
}
node_labels = {
role = "worker-02"
environment = "testing"
}
instance_labels = {
managed_by = "terraform"
environment = "stage"
}
},
"yc-k8s-ng-03" = {
description = "Kubernetes nodes group with GPU"
fixed_scale = {
size = 1
}
platform_id = "gpu-standard-v2"
node_gpus = 2
node_gpu_settings = {
gpu_environment = "runc_drivers_cuda"
}
node_locations = [
{
zone = "ru-central1-b"
subnet_id = "e2lu07tr481h35012c8p"
}
]
labels = {
owner = "example"
service = "kubernetes"
}
node_labels = {
role = "worker-03"
environment = "gpu"
}
node_taints = [
"nvidia.com/gpu=:NoSchedule"
]
}
}
any {} no
node_groups_defaults Map of common default values for Node groups. map(any)
{
"core_fraction": 100,
"disk_size": 64,
"disk_type": "network-ssd",
"ipv4": true,
"ipv6": false,
"nat": false,
"node_cores": 4,
"node_gpus": 0,
"node_memory": 8,
"platform_id": "standard-v3",
"preemptible": false
}
no
node_ipv4_cidr_mask_size (Optional) Size of the masks that are assigned to each node in the cluster.
This effectively limits the maximum number of pods on each node.
number 24 no
node_service_account_id Existing service account ID for worker nodes. string null no
public_access Public or private Kubernetes cluster bool true no
release_channel Kubernetes cluster release channel name string "REGULAR" no
security_groups_ids_list List of security group IDs to which the Kubernetes cluster belongs list(string) [] no
service_account_name IAM service account name. string "k8s-service-account" no
service_ipv4_range CIDR block. IP range from which Kubernetes service cluster IP addresses
will be allocated. It should not overlap with
any subnet in the network the Kubernetes cluster is located in.
string "172.18.0.0/16" no
service_ipv6_range IPv6 CIDR block. IP range for allocating Kubernetes service addresses. string null no
timeouts Timeouts. map(string)
{
"create": "60m",
"delete": "60m",
"update": "60m"
}
no
use_existing_sa Use existing service accounts for control plane and worker nodes or not.
If true parameters master_service_account_id and node_service_account_id must be set.
bool false no

Outputs

Name Description
cluster_ca_certificate Kubernetes cluster certificate.
cluster_id Kubernetes cluster ID.
cluster_name Kubernetes cluster name.
external_cluster_cmd Kubernetes cluster public IP address.
Use the following command to download the kube config and start working with the Yandex Managed Kubernetes cluster:
$ yc managed-kubernetes cluster get-credentials --id <cluster_id> --external
This command will automatically add the kube config for your user; after that, you will be able to test it with the
kubectl cluster-info command.
external_v4_address Kubernetes cluster external IP address.
external_v4_endpoint Kubernetes cluster external URL.
internal_cluster_cmd Kubernetes cluster private IP address.
Use the following command to download kube config and start working with Yandex Managed Kubernetes cluster:
$ yc managed-kubernetes cluster get-credentials --id <cluster_id> --internal
Note: Kubernetes internal cluster nodes are available from the virtual machines in the same VPC as cluster nodes.
internal_v4_address Kubernetes cluster internal IP address.
Note: Kubernetes internal cluster nodes are available from the virtual machines in the same VPC as cluster nodes.
internal_v4_endpoint Kubernetes cluster internal URL.
Note: Kubernetes internal cluster nodes are available from the virtual machines in the same VPC as cluster nodes.
node_account_id Created IAM node account ID.
node_account_name Created IAM node account name.
service_account_id Created IAM service account ID.
service_account_name Created IAM service account name.