# tfp-automation

`tfp-automation` is a framework for testing various Rancher2 Terraform provider resources with Terratest + Go. While it is not meant to provide 1:1 parity with the existing test cases in rancher/rancher, the overall structure of the tests mirrors theirs. This is to ensure that adoption of the framework is as seamless as possible.
When testing locally, you must set the `RANCHER2_PROVIDER_VERSION` environment variable, as type string, formatted without a leading `v`.
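For example, a minimal sketch — the version value is illustrative, and the `CATTLE_TEST_CONFIG` variable is an assumption carried over from rancher/shepherd-based frameworks for pointing tests at the config file:

```bash
# Illustrative version value only; use the provider version under test, without a leading v.
export RANCHER2_PROVIDER_VERSION="3.1.1"

# Assumption: the config file path is supplied via CATTLE_TEST_CONFIG,
# per the rancher/shepherd convention; adjust if your setup differs.
export CATTLE_TEST_CONFIG=$(pwd)/cattle-config.yaml
```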
```yaml
rancher:
  # define rancher specific configs here

terraform:
  # define module specific configs here

terratest:
  # define test specific configs here
```
🔺 Back to top
## Rancher

The `rancher` configurations in the `cattle-config.yaml` will remain consistent across all modules and tests. Fields to configure in this section are as follows:
Field | Description | Type | Example |
---|---|---|---|
host | url to rancher server without leading `https://` and without trailing `/` | string | url-to-rancher-server.com |
adminToken | rancher admin bearer token | string | token-XXXXX:XXXXXXXXXXXXXXX |
insecure | must be set to true | boolean | true |
cleanup | If true, resources will be cleaned up upon test completion | boolean | true |
```yaml
rancher:
  host: url-to-rancher-server.com
  adminToken: token-XXXXX:XXXXXXXXXXXXXXX
  insecure: true
  cleanup: true
```
🔺 Back to top
## Terraform

The `terraform` configurations in the `cattle-config.yaml` are module specific, and the fields to configure vary per module. The generic fields below apply regardless of module:
```yaml
terraform:
  etcd: # This is an optional block.
    disableSnapshot: false
    snapshotCron: "0 */5 * * *"
    snapshotRetention: 6
    s3:
      bucket: ""
      cloudCredentialName: ""
      endpoint: ""
      endpointCA: ""
      folder: ""
      region: ""
      skipSSLVerify: true
  etcdRKE1: # This is an optional block
    backupConfig:
      enabled: true
      intervalHours: 12
      safeTimestamp: true
      timeout: 120
      s3BackupConfig:
        accessKey: ""
        bucketName: ""
        endpoint: ""
        folder: ""
        region: ""
        secretKey: ""
    retention: "72h"
    snapshot: false
  cloudCredentialName: ""
  defaultClusterRoleForProjectMembers: "true" # Can be "true" or "false"
  enableNetworkPolicy: false # Can be true or false
  hostnamePrefix: ""
  machineConfigName: "" # RKE2/K3S specific
  networkPlugin: "" # RKE1 specific
  nodeTemplateName: "" # RKE1 specific
  privateRegistries: # This is an optional block. You must already have a private registry stood up
    engineInsecureRegistry: "" # RKE1 specific
    url: ""
    systemDefaultRegistry: "" # RKE2/K3S specific
    username: "" # RKE1 specific
    password: "" # RKE1 specific
    insecure: true
    authConfigSecretName: "" # RKE2/K3S specific. Secret must be created in the fleet-default namespace already
```
Note: At this time, private registries for RKE2/K3s MUST be used with provider version 3.1.1. This is due to issue rancher/terraform-provider-rancher2#1305.
Module-specific fields to configure in the `terraform` section are as follows:
🔺 Back to top
### AKS

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | aks |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-aks |
clientID | provide azure client id | string | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
clientSecret | provide azure client secret | string | XXXXXXXXXXXXXXXXXXXXXXXXXX |
subscriptionID | provide azure subscription id | string | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
tenantId | provide azure tenant id | string | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
resourceGroup | provide an existing resource group from Azure | string | my-resource-group |
resourceLocation | provide location for Azure instances | string | eastus |
hostnamePrefix | provide a unique hostname prefix for resources | string | tfp |
networkPlugin | provide network plugin | string | kubenet |
availabilityZones | list of availability zones | []string | '1', '2', '3' |
osDiskSizeGB | os disk size in gigabytes | int64 | 128 |
vmSize | vm size to be used for instances | string | Standard_DS2_v2 |
```yaml
terraform:
  module: aks
  cloudCredentialName: tf-aks
  azureConfig:
    clientID: ""
    clientSecret: ""
    subscriptionID: ""
    resourceGroup: ""
    resourceLocation: eastus
    availabilityZones:
      - '1'
      - '2'
      - '3'
    osDiskSizeGB: 128
    tenantId: ""
    vmSize: Standard_DS2_v2
```
🔺 Back to top
### EKS

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | eks |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-eks |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
awsInstanceType | provide aws instance type | string | t3.medium |
region | provide a region for resources to be created in | string | us-east-2 |
awsSubnets | list of valid subnet IDs | []string | subnet-xxxxxxxx, subnet-yyyyyyyy, subnet-zzzzzzzz |
awsSecurityGroups | list of security group IDs to be applied to AWS instances | []string | sg-xxxxxxxxxxxxxxxxx |
hostnamePrefix | provide a unique hostname prefix for resources | string | tfp |
publicAccess | If true, public access will be enabled | boolean | true |
privateAccess | If true, private access will be enabled | boolean | true |
nodeRole | Optional with Rancher v2.7+ - if provided, this custom role will be used when creating instances for node groups | string | arn:aws:iam::############:role/my-custom-NodeInstanceRole-############ |
```yaml
terraform:
  module: eks
  cloudCredentialName: tf-eks
  hostnamePrefix: tfp
  awsConfig:
    awsAccessKey: ""
    awsSecretKey: ""
    awsInstanceType: t3.medium
    region: us-east-2
    awsSubnets:
      - ""
      - ""
    awsSecurityGroups:
      - ""
    publicAccess: true
    privateAccess: true
```
🔺 Back to top
### GKE

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | gke |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-gke |
region | provide region for resources to be created in | string | us-central1-c |
projectID | provide gke project ID | string | my-project-id-here |
network | specify network here | string | default |
subnetwork | specify subnetwork here | string | default |
hostnamePrefix | provide a unique hostname prefix for resources | string | tfp |
```yaml
terraform:
  module: gke
  cloudCredentialName: tf-creds-gke
  hostnamePrefix: tfp
  googleConfig:
    authEncodedJson: |-
      {
        "type": "service_account",
        "project_id": "",
        "private_key_id": "",
        "private_key": "",
        "client_email": "",
        "client_id": "",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": ""
      }
    region: us-central1-c
    projectID: ""
    network: default
    subnetwork: default
```
🔺 Back to top
### Azure RKE1

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | azure_rke1 |
availabilitySet | provide availability set to put virtual machine in | string | docker-machine |
clientId | provide client ID | string | '' |
clientSecret | provide client secret | string | '' |
subscriptionId | provide subscription ID | string | '' |
tenantId | provide the tenant ID | string | '' |
customData | provide path to file | string | '' |
diskSize | disk size if using managed disk | string | 100 |
dockerPort | port number for Docker engine | string | 2376 |
environment | Azure environment | string | AzurePublicCloud |
faultDomainCount | fault domain count to use for availability set | string | 3 |
image | Azure virtual machine OS image | string | Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest |
location | Azure region to create virtual machines | string | eastus2 |
managedDisks | configures VM and availability set for managed disks | bool | false |
noPublicIp | do not create a public IP address for the machine | bool | false |
openPort | make the specified port numbers accessible from the Internet | list | '6443/tcp', '2379/tcp' |
privateIpAddress | specify a static private IP address for the machine | string | '' |
resourceGroup | provide an Azure resource group | string | docker-machine |
size | size for Azure virtual machine | string | Standard_A2 |
sshUser | ssh username | string | '' |
staticPublicIp | assign a static public IP address to the machine | bool | false |
storageType | type of Storage Account to host the OS Disk for the machine | string | Standard_LRS |
updateDomainCount | update domain count to use for availability set | string | 3 |
```yaml
terraform:
  module: azure_rke1
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  azureCredentials:
    clientId: ""
    clientSecret: ""
    environment: "AzurePublicCloud"
    subscriptionId: ""
    tenantId: ""
  azureConfig:
    availabilitySet: "docker-machine"
    subscriptionId: ""
    customData: ""
    diskSize: "100"
    dockerPort: "2376"
    faultDomainCount: "3"
    image: "Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest"
    location: "westus2"
    managedDisks: false
    noPublicIp: false
    openPort: ["6443/tcp","2379/tcp","2380/tcp","8472/udp","4789/udp","9796/tcp","10256/tcp","10250/tcp","10251/tcp","10252/tcp"]
    privateIpAddress: ""
    resourceGroup: ""
    size: "Standard_D2_v2"
    sshUser: "azureuser"
    staticPublicIp: false
    storageType: "Standard_LRS"
    updateDomainCount: "5"
```
🔺 Back to top
### EC2 RKE1

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | ec2_rke1 |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
ami | provide ami (optional; may be left as the empty string '') | string | '' |
awsInstanceType | provide aws instance type | string | t3.medium |
region | provide a region for resources to be created in | string | us-east-2 |
awsSecurityGroupNames | list of security groups to be applied to AWS instances | []string | - security-group-name |
awsSubnetID | provide a valid subnet ID | string | subnet-xxxxxxxx |
awsVpcID | provide a valid VPC ID | string | vpc-xxxxxxxx |
awsZoneLetter | provide zone letter to be used | string | a |
awsRootSize | root size in gigabytes | int64 | 80 |
networkPlugin | provide network plugin to be used | string | canal |
nodeTemplateName | provide a unique name for node template | string | tf-rke1-template |
hostnamePrefix | provide a unique hostname prefix for resources | string | tfp |
```yaml
terraform:
  module: ec2_rke1
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  awsCredentials:
    awsAccessKey: ""
    awsSecretKey: ""
  awsConfig:
    ami: ""
    awsInstanceType: t3.medium
    region: us-east-2
    awsSecurityGroupNames:
      - security-group-name
    awsSubnetID: subnet-xxxxxxxx
    awsVpcID: vpc-xxxxxxxx
    awsZoneLetter: a
    awsRootSize: 80
```
🔺 Back to top
### Linode RKE1

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | linode_rke1 |
linodeToken | provide linode token credential | string | XXXXXXXXXXXXXXXXXXXX |
region | provide a region for resources to be created in | string | us-east |
linodeRootPass | provide a unique root password | string | xxxxxxxxxxxxxxxx |
networkPlugin | provide network plugin to be used | string | canal |
nodeTemplateName | provide a unique name for node template | string | tf-rke1-template |
hostnamePrefix | provide a unique hostname prefix for resources | string | tfp |
```yaml
terraform:
  module: linode_rke1
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  linodeCredentials:
    linodeToken: ""
  linodeConfig:
    region: us-east
    linodeRootPass: ""
```
🔺 Back to top
### vSphere RKE1

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | vsphere_rke1 |
cfgparam | vSphere vm configuration parameters | list | 'disk.enableUUID=TRUE' |
cloneFrom | name of the VM you want to clone | string | '' |
cloudConfig | cloud config YAML content to inject as user-data | string | '' |
contentLibrary | specify the name of the library | string | '' |
cpuCount | vSphere CPU number for docker VM | string | 2 |
creationType | creation type when creating a new virtual machine (vm, template, library, legacy) | string | template |
datacenter | vSphere datacenter for docker VM | string | '' |
datastore | vSphere datastore for docker VM | string | '' |
datastoreCluster | vSphere datastore cluster for VM | string | '' |
diskSize | vSphere size of disk for docker VM (in MB) | string | 2048 |
folder | vSphere folder for the docker VM | string | '' |
hostsystem | vSphere compute resource where the docker VM will be instantiated | string | '' |
memorySize | vSphere size of memory for docker VM (in MB) | string | '' |
network | vSphere network where the docker VM will be attached | list | '' |
password | specify the vSphere password | string | '' |
pool | vSphere resource pool for docker VM | string | '' |
sshPassword | specify the ssh password | string | tcuser |
sshPort | specify the ssh port | string | 22 |
sshUser | specify the ssh user | string | docker |
sshUserGroup | specify the ssh user group | string | staff |
username | specify the vSphere username | string | '' |
vcenter | specify the vCenter server address | string | '' |
vcenterPort | specify the vCenter port | string | 443 |
```yaml
terraform:
  module: vsphere_rke1
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  vsphereCredentials:
    password: ""
    username: ""
    vcenter: ""
    vcenterPort: "443"
  vsphereConfig:
    cfgparam: ["disk.enableUUID=TRUE"]
    cloneFrom: ""
    cloudConfig: ""
    contentLibrary: ""
    cpuCount: "4"
    creationType: "template"
    datacenter: ""
    datastore: ""
    datastoreCluster: ""
    diskSize: "40000"
    folder: ""
    hostsystem: ""
    memorySize: "8192"
    network: [""]
    pool: ""
    sshPassword: "tcuser"
    sshPort: "22"
    sshUser: "docker"
    sshUserGroup: "staff"
```
🔺 Back to top
### Azure RKE2/K3S

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | azure_k3s |
availabilitySet | provide availability set to put virtual machine in | string | docker-machine |
clientId | provide client ID | string | '' |
clientSecret | provide client secret | string | '' |
subscriptionId | provide subscription ID | string | '' |
customData | provide path to file | string | '' |
diskSize | disk size if using managed disk | string | 100 |
dockerPort | port number for Docker engine | string | 2376 |
environment | Azure environment | string | AzurePublicCloud |
faultDomainCount | fault domain count to use for availability set | string | 3 |
image | Azure virtual machine OS image | string | Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest |
location | Azure region to create virtual machines | string | eastus2 |
managedDisks | configures VM and availability set for managed disks | bool | false |
noPublicIp | do not create a public IP address for the machine | bool | false |
openPort | make the specified port numbers accessible from the Internet | list | '6443/tcp', '2379/tcp' |
privateIpAddress | specify a static private IP address for the machine | string | '' |
resourceGroup | provide an Azure resource group | string | docker-machine |
size | size for Azure virtual machine | string | Standard_A2 |
sshUser | ssh username | string | '' |
staticPublicIp | assign a static public IP address to the machine | bool | false |
storageType | type of Storage Account to host the OS Disk for the machine | string | Standard_LRS |
tenantId | provide the tenant ID | string | '' |
updateDomainCount | update domain count to use for availability set | string | 3 |
```yaml
terraform:
  module: azure_k3s
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  azureCredentials:
    clientId: ""
    clientSecret: ""
    environment: "AzurePublicCloud"
    subscriptionId: ""
    tenantId: ""
  azureConfig:
    availabilitySet: "docker-machine"
    customData: ""
    diskSize: "100"
    dockerPort: "2376"
    faultDomainCount: "3"
    image: "Canonical:0001-com-ubuntu-server-jammy:22_04-lts:latest"
    location: "westus2"
    managedDisks: false
    noPublicIp: false
    openPort: ["6443/tcp","2379/tcp","2380/tcp","8472/udp","4789/udp","9796/tcp","10256/tcp","10250/tcp","10251/tcp","10252/tcp"]
    privateIpAddress: ""
    resourceGroup: ""
    size: "Standard_D2_v2"
    sshUser: ""
    staticPublicIp: false
    storageType: "Standard_LRS"
    updateDomainCount: "5"
```
🔺 Back to top
### EC2 RKE2/K3S

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | ec2_rke2 |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-creds-rke2 |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
ami | provide ami (optional; may be left as the empty string '') | string | '' |
region | provide a region for resources to be created in | string | us-east-2 |
awsSecurityGroupNames | list of security groups to be applied to AWS instances | []string | - my-security-group |
awsSubnetID | provide a valid subnet ID | string | subnet-xxxxxxxx |
awsVpcID | provide a valid VPC ID | string | vpc-xxxxxxxx |
awsZoneLetter | provide zone letter to be used | string | a |
machineConfigName | provide a unique name for machine config | string | tf-rke2 |
enableNetworkPolicy | If true, Network Policy will be enabled | boolean | false |
defaultClusterRoleForProjectMembers | select default role to be used for project members | string | user |
```yaml
terraform:
  module: ec2_rke2
  cloudCredentialName: tf-creds-rke2
  machineConfigName: tf-rke2
  enableNetworkPolicy: false
  defaultClusterRoleForProjectMembers: user
  awsCredentials:
    awsAccessKey: ""
    awsSecretKey: ""
  awsConfig:
    ami: ""
    region: us-east-2
    awsSecurityGroupNames:
      - my-security-group
    awsSubnetID: subnet-xxxxxxxx
    awsVpcID: vpc-xxxxxxxx
    awsZoneLetter: a
```
🔺 Back to top
### Linode RKE2/K3S

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | linode_k3s |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-linode |
linodeToken | provide linode token credential | string | XXXXXXXXXXXXXXXXXXXX |
linodeImage | specify image to be used for instances | string | linode/ubuntu20.04 |
region | provide a region for resources to be created in | string | us-east |
linodeRootPass | provide a unique root password | string | xxxxxxxxxxxxxxxx |
machineConfigName | provide a unique name for machine config | string | tf-k3s |
enableNetworkPolicy | If true, Network Policy will be enabled | boolean | false |
defaultClusterRoleForProjectMembers | select default role to be used for project members | string | user |
```yaml
terraform:
  module: linode_k3s
  cloudCredentialName: tf-linode-creds
  machineConfigName: tf-k3s
  enableNetworkPolicy: false
  defaultClusterRoleForProjectMembers: user
  linodeCredentials:
    linodeToken: ""
  linodeConfig:
    linodeImage: linode/ubuntu20.04
    region: us-east
    linodeRootPass: xxxxxxxxxxxx
```
🔺 Back to top
### vSphere RKE2/K3S

Field | Description | Type | Example |
---|---|---|---|
module | specify terraform module here | string | vsphere_k3s |
cfgparam | vSphere vm configuration parameters | list | 'disk.enableUUID=TRUE' |
cloneFrom | name of the VM you want to clone | string | '' |
cloudConfig | cloud config YAML content to inject as user-data | string | '' |
contentLibrary | specify the name of the library | string | '' |
cpuCount | vSphere CPU number for docker VM | string | 2 |
creationType | creation type when creating a new virtual machine (vm, template, library, legacy) | string | template |
datacenter | vSphere datacenter for docker VM | string | '' |
datastore | vSphere datastore for docker VM | string | '' |
datastoreCluster | vSphere datastore cluster for VM | string | '' |
diskSize | vSphere size of disk for docker VM (in MB) | string | 2048 |
folder | vSphere folder for the docker VM | string | '' |
hostsystem | vSphere compute resource where the docker VM will be instantiated | string | '' |
memorySize | vSphere size of memory for docker VM (in MB) | string | '' |
network | vSphere network where the docker VM will be attached | list | '' |
password | specify the vSphere password | string | '' |
pool | vSphere resource pool for docker VM | string | '' |
sshPassword | specify the ssh password | string | tcuser |
sshPort | specify the ssh port | string | 22 |
sshUser | specify the ssh user | string | docker |
sshUserGroup | specify the ssh user group | string | staff |
username | specify the vSphere username | string | '' |
vcenter | specify the vCenter server address | string | '' |
vcenterPort | specify the vCenter port | string | 443 |
```yaml
terraform:
  module: vsphere_k3s
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: tfp
  vsphereCredentials:
    password: ""
    username: ""
    vcenter: ""
    vcenterPort: ""
  vsphereConfig:
    cfgparam: ["disk.enableUUID=TRUE"]
    cloneFrom: ""
    cloudConfig: ""
    contentLibrary: ""
    cpuCount: "4"
    creationType: "template"
    datacenter: ""
    datastore: ""
    datastoreCluster: ""
    diskSize: "40000"
    folder: ""
    hostsystem: ""
    memorySize: "8192"
    network: [""]
    pool: ""
    sshPassword: "tcuser"
    sshPort: "22"
    sshUser: "docker"
    sshUserGroup: "staff"
```
🔺 Back to top
## Terratest

The `terratest` configurations in the `cattle-config.yaml` are test specific, and the fields to configure vary per test. The `nodepools` field in the configurations below will vary depending on the module. What each module expects is outlined first, followed by the whole test-specific configurations.
🔺 Back to top
### Nodepools

Nodepools are provided as type `[]Nodepool`.
🔺 Back to top
#### AKS Nodepools

AKS nodepools only need the `quantity` of nodes per pool to be provided, as type `int64`. The below example will create a cluster with three node pools, each with a single node.
```yaml
nodepools:
  - quantity: 1
  - quantity: 1
  - quantity: 1
```
🔺 Back to top
#### EKS Nodepools

EKS nodepools require the `instanceType`, as type `string`, the `desiredSize` of the node pool, as type `int64`, the `maxSize` of the node pool, as type `int64`, and the `minSize` of the node pool, as type `int64`. The minimum requirement for an EKS node pool's `desiredSize` is `2`. This must be respected or the cluster will fail to provision.
```yaml
nodepools:
  - instanceType: t3.medium
    desiredSize: 3
    maxSize: 3
    minSize: 0
```
🔺 Back to top
#### GKE Nodepools

GKE nodepools require the `quantity` of the node pool, as type `int64`, and the `maxPodsContraint`, as type `int64`.
```yaml
nodepools:
  - quantity: 2
    maxPodsContraint: 110
```
🔺 Back to top
#### RKE1/RKE2/K3S Nodepools

For these modules, the required nodepool fields are the `quantity`, as type `int64`, as well as the roles to be assigned, each toggled via boolean: `etcd`, `controlplane`, and `worker`. The following example will create three node pools, each with an individual role and one node per pool.
```yaml
nodepools:
  - quantity: 1
    etcd: true
    controlplane: false
    worker: false
  - quantity: 1
    etcd: false
    controlplane: true
    worker: false
  - quantity: 1
    etcd: false
    controlplane: false
    worker: true
```
That wraps up the sub-section on nodepools; circling back to the test-specific configs now. Test-specific fields to configure in this section are as follows:
🔺 Back to top
### Provisioning

Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
kubernetesVersion | specify the kubernetes version to be used | string | view yaml below for all module specific expected k8s version formats |
nodeCount | provide the expected initial node count | int64 | 3 |
```yaml
# this example is valid for RKE1 provisioning
terratest:
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  kubernetesVersion: ""
  nodeCount: 3

# Below are the expected formats for all module kubernetes versions
# AKS without leading v
# e.g. '1.28.5'
# EKS without leading v or any tail ending
# e.g. '1.28'
# GKE without leading v but with tail ending included
# e.g. 1.28.7-gke.1026000
# RKE1 with leading v and -rancher1-1 tail
# e.g. v1.28.7-rancher1-1
# RKE2 with leading v and +rke2r# tail
# e.g. v1.28.7+rke2r1
# K3S with leading v and +k3s# tail
# e.g. v1.28.7+k3s1
```
Note: In this test suite, Terraform explicitly cleans up resources after each test case runs; otherwise, Terraform experiences caching issues that cause tests to fail.
🔺 Back to top
### Scaling

Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
scaledUpNodepools | provide nodepool configs to be scaled up to, after initial provisioning | []Nodepool | view section on nodepools above or example yaml below |
scaledDownNodepools | provide nodepool configs to be scaled down to, after scaling up cluster | []Nodepool | view section on nodepools above or example yaml below |
kubernetesVersion | specify the kubernetes version to be used | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
nodeCount | provide the expected initial node count | int64 | 3 |
scaledUpNodeCount | provide the expected node count of scaled up cluster | int64 | 8 |
scaledDownNodeCount | provide the expected node count of scaled down cluster | int64 | 6 |
```yaml
# this example is valid for RKE1 scale
terratest:
  kubernetesVersion: ""
  nodeCount: 3
  scaledUpNodeCount: 8
  scaledDownNodeCount: 6
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  scalingInput:
    scaledUpNodepools:
      - quantity: 3
        etcd: true
        controlplane: false
        worker: false
      - quantity: 2
        etcd: false
        controlplane: true
        worker: false
      - quantity: 3
        etcd: false
        controlplane: false
        worker: true
    scaledDownNodepools:
      - quantity: 3
        etcd: true
        controlplane: false
        worker: false
      - quantity: 2
        etcd: false
        controlplane: true
        worker: false
      - quantity: 1
        etcd: false
        controlplane: false
        worker: true
```
Note: In this test suite, Terraform explicitly cleans up resources after each test case runs; otherwise, Terraform experiences caching issues that cause tests to fail.
🔺 Back to top
### Kubernetes Upgrade

Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
nodeCount | provide the expected initial node count | int64 | 3 |
kubernetesVersion | specify the kubernetes version to be used | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
upgradedKubernetesVersion | specify the kubernetes version to be upgraded to | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
```yaml
# this example is valid for K3s kubernetes upgrade
terratest:
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  nodeCount: 3
  kubernetesVersion: ""
  upgradedKubernetesVersion: ""
```
Note: In this test suite, Terraform explicitly cleans up resources after each test case runs; otherwise, Terraform experiences caching issues that cause tests to fail.
🔺 Back to top
### Snapshots

Field | Description | Type | Example |
---|---|---|---|
snapshotInput | block in which to define snapshot parameters | Snapshots | view the snapshotInput block in the example yaml below |
snapshotRestore | provide the snapshot restore option (none, kubernetesVersion, or all) | string | none |
upgradeKubernetesVersion | specify the kubernetes version to be upgraded to | string | view the snapshotInput block in the example yaml below |
controlPlaneConcurrencyValue | specify the control plane concurrency value used when upgrading | string | 15% |
workerConcurrencyValue | specify the worker concurrency value used when upgrading RKE2/K3s clusters | string | 20% |
```yaml
terratest:
  snapshotInput:
    snapshotRestore: "none"
    upgradeKubernetesVersion: ""
    controlPlaneConcurrencyValue: "15%"
    workerConcurrencyValue: "20%"
```
Note: In this test suite, Terraform explicitly cleans up resources after each test case runs; otherwise, Terraform experiences caching issues that cause tests to fail.
🔺 Back to top
### Build Module

The build module test may be run to create a `main.tf` Terraform configuration file for the desired module. The file is logged to the output for future reference and use. Testing configurations for this are the same as outlined in the provisioning test above; review the provisioning test configurations for more details.
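For instance, a run might look like the following sketch; the test name and package path are hypothetical and should be matched to your checkout:

```bash
# Hypothetical test name and package path -- locate the actual build-module
# test in the repo before running. The long timeout suits Terratest runs.
go test -v -timeout 60m -run TestBuildModule ./tests/...
```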
🔺 Back to top
### Cleanup

The cleanup test may be used to clean up resources in situations where the rancher config has `cleanup` set to `false`. This may be helpful when debugging. The test expects the same configurations used to initially create the environment, so that it can properly clean them up.
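As a sketch of that debugging workflow, you might provision with the existing `cleanup` field set to `false`, then re-use the identical config when running the cleanup test (placeholder values shown):

```yaml
rancher:
  host: url-to-rancher-server.com
  adminToken: token-XXXXX:XXXXXXXXXXXXXXX
  insecure: true
  cleanup: false # leave resources up for debugging; run the cleanup test later with this same config
```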