Update terraform-plugin-sdk to v2
The provider currently uses `github.com/hashicorp/terraform-plugin-sdk@v1.17.2`. v2 provides noticeable improvements, most notably the ability to debug the provider from within an IDE. There is a draft PR already up: #347
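For context, the IDE debugging support comes from the debug mode that SDK v2 exposes through `plugin.Serve`. A minimal sketch of what the provider's `main.go` could look like after the upgrade (the `rke.Provider` import path and function are assumed here for illustration, not taken from the actual repo):

```go
package main

import (
	"flag"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"

	// Assumed provider package path, for illustration only.
	"github.com/rancher/terraform-provider-rke/rke"
)

func main() {
	// With -debug, the provider runs as a standalone process and prints a
	// TF_REATTACH_PROVIDERS value, so it can be launched and stepped through
	// from an IDE instead of being started by Terraform itself.
	debug := flag.Bool("debug", false, "run the provider with support for debuggers like delve")
	flag.Parse()

	plugin.Serve(&plugin.ServeOpts{
		Debug:        *debug,
		ProviderAddr: "registry.terraform.io/rancher/rke",
		ProviderFunc: func() *schema.Provider { return rke.Provider() },
	})
}
```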
This is effectively an upgrade of the entire rke terraform provider, so it should be tested as thoroughly as possible. There are no direct test cases for this issue, but it will likely be covered via normal testing as part of the release process.
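To illustrate why the change touches every resource: besides the `/v2` import path, SDK v2 moves CRUD functions to context-aware variants that return `diag.Diagnostics` instead of `error`. A rough sketch of the pattern, using an illustrative resource rather than the provider's real `rke_cluster` schema:

```go
package rke

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Illustrative resource only; the real rke_cluster schema is far larger.
func resourceExampleCluster() *schema.Resource {
	return &schema.Resource{
		// v1 used Create/Read/Delete funcs returning error;
		// v2 uses the *Context variants returning diag.Diagnostics.
		CreateContext: resourceExampleClusterCreate,
		ReadContext:   resourceExampleClusterRead,
		DeleteContext: resourceExampleClusterDelete,
		Schema: map[string]*schema.Schema{
			"kubernetes_version": {
				Type:     schema.TypeString,
				Required: true,
				ForceNew: true,
			},
		},
	}
}

func resourceExampleClusterCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// ... call out to RKE here, honoring ctx for cancellation ...
	d.SetId("example")
	return resourceExampleClusterRead(ctx, d, meta)
}

func resourceExampleClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// ... refresh state from the cluster ...
	return nil
}

func resourceExampleClusterDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	// ... tear down the cluster ...
	d.SetId("")
	return nil
}
```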
@jakefhyde Customers are requesting TFP RKE v1.4.2 for k8s 1.25 clusters, which would be out of band from the next Rancher release if I release it now. Should we wait and release RKE v1.4.2 with Rancher Q2 2.7.x, so that it goes through the normal testing pipeline as part of the release process?
@Josh-Diamond The last comment is outdated! TFP RKE v1.4.2 has already been released and k8s 1.25 clusters are supported. From what we know, the TFP RKE provider is still functioning as expected per testing from other issues such as #403, so please close this issue out.
Ticket #353 - Test Results - ✅

With tfp-rke v1.4.2:

Scenario | Test Case | Result |
---|---|---|
1. | Provision a downstream RKE cluster w/ k8s v1.25.9-rancher2-2 | ✅ |
2. | Using cluster from Scenario 1, upgrade k8s version to v1.26.4-rancher2-1 | ✅ |
3. | Using cluster from Scenario 2, add an additional worker node to the cluster | ✅ |
Scenario 1 -
- Using the `main.tf` provided below, provision a downstream RKE cluster using k8s v1.25.9-rancher2-2:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.4.2"
    }
  }
}

provider "rke" {
  # Configuration options
}

resource "rke_cluster" "rke_cluster" {
  kubernetes_version = "v1.25.9-rancher2-2"
  enable_cri_dockerd = true

  nodes {
    address = "<REDACTED>"
    user    = "ubuntu"
    role    = ["etcd", "controlplane", "worker"]
    ssh_key = file("<REDACTED>")
  }

  services {
    kube_api {
      pod_security_policy = false
    }
  }
}
- Verified - cluster successfully provisions, as expected
Scenario 2 -
- Using same env + cluster from Scenario 1, upgrade the cluster's k8s version by running the following `main.tf` file:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.4.2"
    }
  }
}

provider "rke" {
  # Configuration options
}

resource "rke_cluster" "rke_cluster" {
  kubernetes_version = "v1.26.4-rancher2-1"
  enable_cri_dockerd = true

  nodes {
    address = "<REDACTED>"
    user    = "ubuntu"
    role    = ["etcd", "controlplane", "worker"]
    ssh_key = file("<REDACTED>")
  }

  services {
    kube_api {
      pod_security_policy = false
    }
  }
}
- Verified - cluster successfully upgrades k8s version, as expected
Scenario 3 -
- Using same env + cluster as Scenario 2, add an additional worker node to the cluster by running the following `main.tf` file:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.4.2"
    }
  }
}

provider "rke" {
  # Configuration options
}

resource "rke_cluster" "rke_cluster" {
  kubernetes_version = "v1.26.4-rancher2-1"
  enable_cri_dockerd = true

  nodes {
    address = "<REDACTED>"
    user    = "ubuntu"
    role    = ["etcd", "controlplane", "worker"]
    ssh_key = file("<REDACTED>")
  }

  nodes {
    address = "<REDACTED>"
    user    = "ubuntu"
    role    = ["worker"]
    ssh_key = file("<REDACTED>")
  }

  services {
    kube_api {
      pod_security_policy = false
    }
  }
}