terraform-openstack-rke2


Terraform module to deploy Kubernetes with RKE2 on OpenStack.

Unlike the RKE version, this module is not opinionated and lets you configure everything via the RKE2 configuration file.

Prerequisites

Features

  • HA controlplane
  • Multiple agent node pools
  • Upgrade mechanism

Examples

See examples directory.

Documentation

See USAGE.md for all available options.

Keypair

You can either specify an SSH key file to generate a new keypair via ssh_key_file (the default) or reference an already existing keypair via ssh_keypair_name.

Warning

The default config will try to use the SSH agent for SSH connections to the nodes. Add use_ssh_agent = false if you don't use one.
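
A minimal keypair setup might look like this (the variable names come from this module; the key path and keypair name are hypothetical):

```hcl
# Generate a new keypair from a local key file (default behaviour):
ssh_key_file  = "~/.ssh/id_rsa"   # hypothetical path
use_ssh_agent = false             # set if you don't run an ssh-agent

# ...or reuse an existing OpenStack keypair instead:
# ssh_keypair_name = "my-keypair" # hypothetical keypair name
```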

Secgroup

You can define your own rules (e.g. limiting ports 22 and 6443 to an admin box):

secgroup_rules = [
  { "source" = "x.x.x.x",   "protocol" = "tcp", "port" = 22 },
  { "source" = "x.x.x.x",   "protocol" = "tcp", "port" = 6443 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 80 },
  { "source" = "0.0.0.0/0", "protocol" = "tcp", "port" = 443 },
]

Nodes affinity

You can set the affinity policy for the control plane and for each node pool via server_group_affinity. The default is soft-anti-affinity.
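
For example, to request strict anti-affinity instead of the default (the variable name comes from this module; the assumption here is that the accepted values match the OpenStack server group policy names):

```hcl
server_group_affinity = "anti-affinity"
```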

Warning

The soft-anti-affinity and soft-affinity policies require Compute service API microversion 2.15 or above.

Boot from volume

Some providers require booting instances from an attached boot volume instead of the Nova ephemeral disk. To enable this feature, set the following variables in your configuration. You can use different values for server and agent nodes.

boot_from_volume = true
boot_volume_size = 20
boot_volume_type = "rbd-1"

Kubernetes version

You can specify the RKE2 version with the rke2_version variable. Refer to the RKE2 supported versions.

Upgrade by setting the target version via rke2_version and setting do_upgrade = true. The nodes are upgraded one by one, server nodes first.
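
A minimal upgrade sketch, assuming a valid RKE2 release tag (the version shown here is hypothetical):

```hcl
rke2_version = "v1.28.4+rke2r1"  # hypothetical target version
do_upgrade   = true
```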

Warning

In-place upgrade mechanism is not battle-tested and relies on Terraform provisioners.

Addons

Set the manifests_path variable to point to the directory containing your manifests and HelmChart resources (see the JupyterHub example).

If you need a templating step for your manifests, you can use manifests_gzb64 (see the cinder-csi-plugin example).
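
A sketch of that templating step, assuming manifests_gzb64 takes a map of file names to gzipped, base64-encoded content (the variable name comes from this module; the map shape, template path, and template variable are assumptions):

```hcl
manifests_gzb64 = {
  "cinder-csi.yaml" = base64gzip(templatefile("${path.module}/templates/cinder-csi.yaml.tpl", {
    auth_url = var.auth_url  # hypothetical template variable
  }))
}
```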

Warning

Modifications made to manifests after cluster deployment won't have any effect.

Additional server config files

Set the additional_configs_path variable to the directory containing your additional RKE2 server configs (see the Audit Policy example).

If you need a template step for your config files, you can use additional_configs_gzb64.

Warning

Modifications made to config files after cluster deployment won't have any effect.

Downscale

You need to manually drain and remove nodes before downscaling a node pool.
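
The manual steps might look like this, assuming the node to remove is named pool-a-2 (a hypothetical name):

```
kubectl drain pool-a-2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node pool-a-2
# then lower the pool size in the module configuration and apply
```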

Kubernetes config

You can tell the module to output the Kubernetes config by setting output_kubernetes_config = true.

Warning

Interpolating provider variables from module outputs is not the recommended way to achieve integration. See here and here.

Using a data source is recommended.

(Not recommended) You can use this module's outputs to populate the Terraform Kubernetes provider:

provider "kubernetes" {
  host                   = module.controlplane.kubernetes_config.host
  client_certificate     = module.controlplane.kubernetes_config.client_certificate
  client_key             = module.controlplane.kubernetes_config.client_key
  cluster_ca_certificate = module.controlplane.kubernetes_config.cluster_ca_certificate
}

The recommended way needs two apply operations and a properly configured terraform_remote_state data source:

provider "kubernetes" {
  host                   = data.terraform_remote_state.rke2.outputs.kubernetes_config.host
  client_certificate     = data.terraform_remote_state.rke2.outputs.kubernetes_config.client_certificate
  client_key             = data.terraform_remote_state.rke2.outputs.kubernetes_config.client_key
  cluster_ca_certificate = data.terraform_remote_state.rke2.outputs.kubernetes_config.cluster_ca_certificate
}

Node lifecycle assumptions

Note

Changes to certain module arguments will intentionally not cause the recreation of instances.

To provide users a better and more manageable experience, several arguments are included in the instance's ignore_changes lifecycle block. You must manually taint the instance to force the recreation of the resource:

terraform taint 'module.controlplane.module.server.openstack_compute_instance_v2.instance'

Proxy

You can specify a proxy via the proxy_url variable. Private address ranges are automatically excluded; you can add more addresses via the no_proxy variable. You might want to add your organization's DNS domain (that of the Keystone OpenStack API endpoint).
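
A minimal sketch (the variable names come from this module; the proxy endpoint and domains are hypothetical, and the exact type of no_proxy is an assumption):

```hcl
proxy_url = "http://proxy.example.org:3128"      # hypothetical proxy endpoint
no_proxy  = "example.org,keystone.example.org"   # hypothetical extra exclusions
```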