This Terraform module deploys a Kubernetes cluster on Azure using AKS (Azure Kubernetes Service) and adds support for monitoring with Log Analytics.

-> **NOTE:** If you have not assigned `client_id` or `client_secret`, a `SystemAssigned` identity will be created.
```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "aks-resource-group"
  location = "eastus"
}

module "network" {
  source              = "Azure/network/azurerm"
  resource_group_name = azurerm_resource_group.example.name
  address_space       = "10.52.0.0/16"
  subnet_prefixes     = ["10.52.0.0/24"]
  subnet_names        = ["subnet1"]
  depends_on          = [azurerm_resource_group.example]
}

data "azuread_group" "aks_cluster_admins" {
  display_name = "AKS-cluster-admins"
}

module "aks" {
  source                           = "Azure/aks/azurerm"
  resource_group_name              = azurerm_resource_group.example.name
  client_id                        = "your-service-principal-client-appid"
  client_secret                    = "your-service-principal-client-password"
  kubernetes_version               = "1.23.5"
  orchestrator_version             = "1.23.5"
  prefix                           = "prefix"
  cluster_name                     = "cluster-name"
  network_plugin                   = "azure"
  vnet_subnet_id                   = module.network.vnet_subnets[0]
  os_disk_size_gb                  = 50
  sku_tier                         = "Paid" # defaults to Free
  enable_role_based_access_control = true
  rbac_aad_admin_group_object_ids  = [data.azuread_group.aks_cluster_admins.id]
  rbac_aad_managed                 = true
  private_cluster_enabled          = true # defaults to false
  enable_http_application_routing  = true
  enable_azure_policy              = true
  enable_auto_scaling              = true
  enable_host_encryption           = true
  agents_min_count                 = 1
  agents_max_count                 = 2
  agents_count                     = null # Please set `agents_count` to `null` while `enable_auto_scaling` is `true` to avoid possible `agents_count` changes.
  agents_max_pods                  = 100
  agents_pool_name                 = "exnodepool"
  agents_availability_zones        = ["1", "2"]
  agents_type                      = "VirtualMachineScaleSets"

  agents_labels = {
    "nodepool" : "defaultnodepool"
  }

  agents_tags = {
    "Agent" : "defaultnodepoolagent"
  }

  enable_ingress_application_gateway      = true
  ingress_application_gateway_name        = "aks-agw"
  ingress_application_gateway_subnet_cidr = "10.52.1.0/24"

  network_policy                 = "azure"
  net_profile_dns_service_ip     = "10.0.0.10"
  net_profile_docker_bridge_cidr = "170.10.0.1/16"
  net_profile_service_cidr       = "10.0.0.0/16"

  depends_on = [module.network]
}
```
The module can also be deployed with a minimal configuration; because no `client_id` or `client_secret` is supplied, a `SystemAssigned` identity is created:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "aks-resource-group"
  location = "eastus"
}

module "aks" {
  source              = "Azure/aks/azurerm"
  resource_group_name = azurerm_resource_group.example.name
  prefix              = "prefix"
}
```
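If a user-assigned managed identity is preferred over a service principal or a `SystemAssigned` identity, the `identity_type` and `identity_ids` inputs documented below can be used instead. A minimal sketch, assuming an identity and module label chosen here for illustration:

```hcl
# Create a user-assigned managed identity for the cluster (names are illustrative).
resource "azurerm_user_assigned_identity" "aks" {
  name                = "aks-identity"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

module "aks_with_uai" {
  source              = "Azure/aks/azurerm"
  resource_group_name = azurerm_resource_group.example.name
  prefix              = "prefix"

  # Switch the cluster identity to the user-assigned identity created above.
  identity_type = "UserAssigned"
  identity_ids  = [azurerm_user_assigned_identity.aks.id]
}
```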
The module exposes outputs that can be used to configure a Kubernetes provider after the AKS cluster has been deployed:
```hcl
provider "kubernetes" {
  host                   = module.aks.host
  client_certificate     = base64decode(module.aks.client_certificate)
  client_key             = base64decode(module.aks.client_key)
  cluster_ca_certificate = base64decode(module.aks.cluster_ca_certificate)
}
```
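With the `kubernetes` provider wired up this way, cluster objects can be managed from the same Terraform configuration. A small sketch (the namespace name is illustrative):

```hcl
# Manage an in-cluster object through the provider configured above.
resource "kubernetes_namespace" "example" {
  metadata {
    name = "staging"
  }
}
```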
We provide two ways to build, run, and test the module on a local development machine: native (Mac/Linux) or Docker.

We provide a simple script to quickly set up the module development environment:

```sh
$ curl -sSL https://raw.githubusercontent.com/Azure/terramodtest/master/tool/env_setup.sh | sudo bash
```

Then run the tests in a local shell:
```sh
$ cd $GOPATH/src/{directory_name}/
$ bundle install

# set service principal
$ export ARM_CLIENT_ID="service-principal-client-id"
$ export ARM_CLIENT_SECRET="service-principal-client-secret"
$ export ARM_SUBSCRIPTION_ID="subscription-id"
$ export ARM_TENANT_ID="tenant-id"
$ export ARM_TEST_LOCATION="eastus"
$ export ARM_TEST_LOCATION_ALT="eastus2"
$ export ARM_TEST_LOCATION_ALT2="westus"

# set aks variables
$ export TF_VAR_client_id="service-principal-client-id"
$ export TF_VAR_client_secret="service-principal-client-secret"

# run test
$ rake build
$ rake full
```
We provide a Dockerfile to build a new image based `FROM` the `mcr.microsoft.com/terraform-test` Docker Hub image, which adds additional tools and packages specific to this module (see the Custom Image section). Alternatively, use only the `microsoft/terraform-test` Docker Hub image by following these instructions.
This builds the custom image:

```sh
$ docker build --build-arg BUILD_ARM_SUBSCRIPTION_ID=$ARM_SUBSCRIPTION_ID --build-arg BUILD_ARM_CLIENT_ID=$ARM_CLIENT_ID --build-arg BUILD_ARM_CLIENT_SECRET=$ARM_CLIENT_SECRET --build-arg BUILD_ARM_TENANT_ID=$ARM_TENANT_ID -t azure-aks .
```

This runs the build and unit tests:

```sh
$ docker run --rm azure-aks /bin/bash -c "bundle install && rake build"
```

This runs the end-to-end tests:

```sh
$ docker run --rm azure-aks /bin/bash -c "bundle install && rake e2e"
```

This runs the full tests:

```sh
$ docker run --rm azure-aks /bin/bash -c "bundle install && rake full"
```
Originally created by Damien Caro and Malte Lantin.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
**Requirements**

| Name | Version |
|---|---|
| terraform | >= 0.12 |
| azurerm | ~> 3.3 |
**Providers**

| Name | Version |
|---|---|
| azurerm | ~> 3.3 |
**Modules**

| Name | Source | Version |
|---|---|---|
| ssh-key | ./modules/ssh-key | n/a |
**Resources**

| Name | Type |
|---|---|
| azurerm_kubernetes_cluster.main | resource |
| azurerm_log_analytics_solution.main | resource |
| azurerm_log_analytics_workspace.main | resource |
| azurerm_resource_group.main | data source |
**Inputs**

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| admin_username | The username of the local administrator to be created on the Kubernetes cluster | `string` | `"azureuser"` | no |
| agents_availability_zones | (Optional) A list of Availability Zones across which the Node Pool should be spread. Changing this forces a new resource to be created. | `list(string)` | `null` | no |
| agents_count | The number of Agents that should exist in the Agent Pool. Please set `agents_count` to `null` while `enable_auto_scaling` is `true` to avoid possible `agents_count` changes. | `number` | `2` | no |
| agents_labels | (Optional) A map of Kubernetes labels which should be applied to nodes in the Default Node Pool. Changing this forces a new resource to be created. | `map(string)` | `{}` | no |
| agents_max_count | Maximum number of nodes in a pool | `number` | `null` | no |
| agents_max_pods | (Optional) The maximum number of pods that can run on each agent. Changing this forces a new resource to be created. | `number` | `null` | no |
| agents_min_count | Minimum number of nodes in a pool | `number` | `null` | no |
| agents_pool_name | The default Azure AKS agentpool (nodepool) name. | `string` | `"nodepool"` | no |
| agents_size | The default virtual machine size for the Kubernetes agents | `string` | `"Standard_D2s_v3"` | no |
| agents_tags | (Optional) A mapping of tags to assign to the Node Pool. | `map(string)` | `{}` | no |
| agents_type | (Optional) The type of Node Pool which should be created. Possible values are `AvailabilitySet` and `VirtualMachineScaleSets`. Defaults to `VirtualMachineScaleSets`. | `string` | `"VirtualMachineScaleSets"` | no |
| client_id | (Optional) The Client ID (appId) for the Service Principal used for the AKS deployment | `string` | `""` | no |
| client_secret | (Optional) The Client Secret (password) for the Service Principal used for the AKS deployment | `string` | `""` | no |
| cluster_log_analytics_workspace_name | (Optional) The name of the Analytics workspace | `string` | `null` | no |
| cluster_name | (Optional) The name for the AKS resources created in the specified Azure Resource Group. This variable overwrites the `prefix` var (the `prefix` var will still be applied to the dns_prefix if it is set) | `string` | `null` | no |
| enable_auto_scaling | Enable node pool autoscaling | `bool` | `false` | no |
| enable_azure_policy | Enable Azure Policy Addon. | `bool` | `false` | no |
| enable_host_encryption | Enable Host Encryption for the default node pool. The encryption-at-host feature must be enabled on the subscription: https://docs.microsoft.com/azure/virtual-machines/linux/disks-enable-host-based-encryption-cli | `bool` | `false` | no |
| enable_http_application_routing | Enable HTTP Application Routing Addon (forces recreation). | `bool` | `false` | no |
| enable_ingress_application_gateway | Whether to deploy the Application Gateway ingress controller to this Kubernetes Cluster | `bool` | `null` | no |
| enable_log_analytics_workspace | Enable the creation of `azurerm_log_analytics_workspace` and `azurerm_log_analytics_solution` or not | `bool` | `true` | no |
| enable_node_public_ip | (Optional) Should nodes in this Node Pool have a Public IP Address? Defaults to false. | `bool` | `false` | no |
| enable_role_based_access_control | Enable Role Based Access Control. | `bool` | `false` | no |
| identity_ids | (Optional) Specifies a list of User Assigned Managed Identity IDs to be assigned to this Kubernetes Cluster. | `list(string)` | `null` | no |
| identity_type | (Optional) The type of identity used for the managed cluster. Conflicts with `client_id` and `client_secret`. Possible values are `SystemAssigned` and `UserAssigned`. If `UserAssigned` is set, a `user_assigned_identity_id` must be set as well. | `string` | `"SystemAssigned"` | no |
| ingress_application_gateway_id | The ID of the Application Gateway to integrate with the ingress controller of this Kubernetes Cluster. | `string` | `null` | no |
| ingress_application_gateway_name | The name of the Application Gateway to be used or created in the Nodepool Resource Group, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | `string` | `null` | no |
| ingress_application_gateway_subnet_cidr | The subnet CIDR to be used to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | `string` | `null` | no |
| ingress_application_gateway_subnet_id | The ID of the subnet on which to create an Application Gateway, which in turn will be integrated with the ingress controller of this Kubernetes Cluster. | `string` | `null` | no |
| kubernetes_version | Specify which Kubernetes release to use. The default used is the latest Kubernetes version available in the region | `string` | `null` | no |
| location | Location of the cluster; if not defined it will be read from the resource group | `string` | `null` | no |
| log_analytics_workspace_sku | The SKU (pricing level) of the Log Analytics workspace. For new subscriptions the SKU should be set to `PerGB2018` | `string` | `"PerGB2018"` | no |
| log_retention_in_days | The retention period for the logs in days | `number` | `30` | no |
| net_profile_dns_service_ip | (Optional) IP address within the Kubernetes service address range that will be used by cluster service discovery (kube-dns). Changing this forces a new resource to be created. | `string` | `null` | no |
| net_profile_docker_bridge_cidr | (Optional) IP address (in CIDR notation) used as the Docker bridge IP address on nodes. Changing this forces a new resource to be created. | `string` | `null` | no |
| net_profile_outbound_type | (Optional) The outbound (egress) routing method which should be used for this Kubernetes Cluster. Possible values are `loadBalancer` and `userDefinedRouting`. Defaults to `loadBalancer`. | `string` | `"loadBalancer"` | no |
| net_profile_pod_cidr | (Optional) The CIDR to use for pod IP addresses. This field can only be set when `network_plugin` is set to `kubenet`. Changing this forces a new resource to be created. | `string` | `null` | no |
| net_profile_service_cidr | (Optional) The Network Range used by the Kubernetes service. Changing this forces a new resource to be created. | `string` | `null` | no |
| network_plugin | Network plugin to use for networking. | `string` | `"kubenet"` | no |
| network_policy | (Optional) Sets up the network policy to be used with Azure CNI. Network policy allows control of the traffic flow between pods. Currently supported values are `calico` and `azure`. Changing this forces a new resource to be created. | `string` | `null` | no |
| node_resource_group | The auto-generated Resource Group which contains the resources for this Managed Kubernetes Cluster. | `string` | `null` | no |
| orchestrator_version | Specify which Kubernetes release to use for the orchestration layer. The default used is the latest Kubernetes version available in the region | `string` | `null` | no |
| os_disk_size_gb | Disk size of nodes in GB. | `number` | `50` | no |
| os_disk_type | The type of disk which should be used for the Operating System. Possible values are `Ephemeral` and `Managed`. Defaults to `Managed`. Changing this forces a new resource to be created. | `string` | `"Managed"` | no |
| prefix | (Required) The prefix for the resources created in the specified Azure Resource Group | `string` | n/a | yes |
| private_cluster_enabled | If true, the cluster API server will be exposed only on an internal IP address and available only within the cluster vnet. | `bool` | `false` | no |
| public_ssh_key | A custom SSH key to control access to the AKS cluster | `string` | `""` | no |
| rbac_aad_admin_group_object_ids | Object IDs of groups with admin access. | `list(string)` | `null` | no |
| rbac_aad_client_app_id | The Client ID of an Azure Active Directory Application. | `string` | `null` | no |
| rbac_aad_managed | Is the Azure Active Directory integration Managed, meaning that Azure will create/manage the Service Principal used for integration. | `bool` | `false` | no |
| rbac_aad_server_app_id | The Server ID of an Azure Active Directory Application. | `string` | `null` | no |
| rbac_aad_server_app_secret | The Server Secret of an Azure Active Directory Application. | `string` | `null` | no |
| resource_group_name | The resource group name to be imported | `string` | n/a | yes |
| sku_tier | The SKU Tier that should be used for this Kubernetes Cluster. Possible values are `Free` and `Paid` | `string` | `"Free"` | no |
| tags | Any tags that should be present on the AKS cluster resources | `map(string)` | `{}` | no |
| vnet_subnet_id | (Optional) The ID of a Subnet where the Kubernetes Node Pool should exist. Changing this forces a new resource to be created. | `string` | `null` | no |
**Outputs**

| Name | Description |
|---|---|
| addon_profile | n/a |
| admin_client_certificate | n/a |
| admin_client_key | n/a |
| admin_cluster_ca_certificate | n/a |
| admin_host | n/a |
| admin_password | n/a |
| admin_username | n/a |
| aks_id | n/a |
| client_certificate | n/a |
| client_key | n/a |
| cluster_ca_certificate | n/a |
| host | n/a |
| http_application_routing_zone_name | n/a |
| kube_admin_config_raw | n/a |
| kube_config_raw | n/a |
| kubelet_identity | n/a |
| location | n/a |
| node_resource_group | n/a |
| password | n/a |
| system_assigned_identity | n/a |
| username | n/a |
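As a usage sketch, the `kube_config_raw` output can be written to disk for `kubectl` access. This assumes the `hashicorp/local` provider is available; the file name is illustrative, and note the rendered file contains cluster credentials:

```hcl
# Persist the raw kubeconfig emitted by the module so kubectl can use it.
resource "local_file" "kubeconfig" {
  filename        = "${path.module}/kubeconfig"
  content         = module.aks.kube_config_raw
  file_permission = "0600" # restrict access: the file holds cluster credentials
}
```

After `terraform apply`, point `kubectl` at it with `KUBECONFIG=./kubeconfig kubectl get nodes`.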