ovh/terraform-provider-ovh

[BUG] desired_nodes is not propagated to the nodepool

wilfriedroset opened this issue · 0 comments

Describe the bug

Setting desired_nodes on an ovh_cloud_project_kube_nodepool is not propagated when the nodepool is created: the nodepool comes up with desired_nodes equal to 0, and every subsequent terraform apply plans the same 0 -> 1 in-place update without the pool ever actually scaling up.

Terraform Version

❯ terraform -v
Terraform v1.6.2
on darwin_amd64

OVH Terraform Provider Version

❯ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of ovh/ovh from the dependency lock file
- Using previously-installed ovh/ovh v0.34.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Affected Resource(s)

  • ovh_cloud_project_kube_nodepool

Terraform Configuration Files

terraform {
  required_providers {
    ovh = {
      source  = "ovh/ovh"
      version = "0.34.0"
    }
  }
  required_version = ">= 1.6.2"
}

resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  service_name  = "[redacted]"
  kube_id       = "[redacted]"
  name          = "my-pool-1"
  flavor_name   = "b2-7"
  desired_nodes = 1
}
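
Side note for anyone reproducing this: Terraform's standard debug logging should reveal whether desiredNodes actually makes it into the creation request the provider sends to the OVH API. Something along these lines (the grep pattern is just illustrative):

❯ TF_LOG=DEBUG terraform apply --auto-approve 2>&1 | grep -i desired  # grep pattern is illustrative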

Expected Behavior

The nodepool is created with the configured desired_nodes value (1).

Actual Behavior

The nodepool is created with desired_nodes equal to 0.

Steps to Reproduce

  1. terraform plan
❯ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  + create

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be created
  + resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      + anti_affinity    = (known after apply)
      + autoscale        = (known after apply)
      + available_nodes  = (known after apply)
      + created_at       = (known after apply)
      + current_nodes    = (known after apply)
      + desired_nodes    = 1 <-- here we see that we have defined 1
      + flavor           = (known after apply)
      + flavor_name      = "b2-7"
      + id               = (known after apply)
      + kube_id          = "[redacted]"
      + max_nodes        = (known after apply)
      + min_nodes        = (known after apply)
      + monthly_billed   = (known after apply)
      + name             = "my-pool-1"
      + project_id       = (known after apply)
      + service_name     = "[redacted]"
      + size_status      = (known after apply)
      + status           = (known after apply)
      + up_to_date_nodes = (known after apply)
      + updated_at       = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions
if you run "terraform apply" now.
  2. terraform apply
❯ time terraform apply --auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be created
  + resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      + anti_affinity    = (known after apply)
      + autoscale        = (known after apply)
      + available_nodes  = (known after apply)
      + created_at       = (known after apply)
      + current_nodes    = (known after apply)
      + desired_nodes    = 1
      + flavor           = (known after apply)
      + flavor_name      = "b2-7"
      + id               = (known after apply)
      + kube_id          = "[redacted]"
      + max_nodes        = (known after apply)
      + min_nodes        = (known after apply)
      + monthly_billed   = (known after apply)
      + name             = "my-pool-1"
      + project_id       = (known after apply)
      + service_name     = "[redacted]"
      + size_status      = (known after apply)
      + status           = (known after apply)
      + up_to_date_nodes = (known after apply)
      + updated_at       = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Creating...
ovh_cloud_project_kube_nodepool.node_pool: Still creating... [10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool: Creation complete after 15s [id=0e68561b-4c8b-42f7-b27b-5d9c41580d6d]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform apply --auto-approve  0.46s user 0.21s system 4% cpu 16.181 total <-- this is unusual, as in my experience compute nodes do not spawn that fast
  3. Confirm with kubectl get nodepool that the nodepool was not created correctly
❯ k get nodepool
NAME                  FLAVOR   AUTOSCALED   MONTHLYBILLED   ANTIAFFINITY   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   MIN   MAX   AGE
my-pool-1             b2-7     false        false           false          0         0         0            0           0     0     2m17s

We can see that everything is 0.

  4. Terraform tries to fix this endlessly (a possible mitigation is sketched after these steps)
❯ time terraform apply --auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  + create

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be created
  + resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      + anti_affinity    = (known after apply)
      + autoscale        = (known after apply)
      + available_nodes  = (known after apply)
      + created_at       = (known after apply)
      + current_nodes    = (known after apply)
      + desired_nodes    = 1
      + flavor           = (known after apply)
      + flavor_name      = "b2-7"
      + id               = (known after apply)
      + kube_id          = "[redacted]"
      + max_nodes        = (known after apply)
      + min_nodes        = (known after apply)
      + monthly_billed   = (known after apply)
      + name             = "my-pool-1"
      + project_id       = (known after apply)
      + service_name     = "[redacted]"
      + size_status      = (known after apply)
      + status           = (known after apply)
      + up_to_date_nodes = (known after apply)
      + updated_at       = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Creating...
ovh_cloud_project_kube_nodepool.node_pool: Creation complete after 8s [id=182ab6cf-075a-4063-b64b-662d1649c245]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform apply --auto-approve  0.45s user 0.16s system 6% cpu 10.069 total
❯ terraform apply --auto-approve
ovh_cloud_project_kube_nodepool.node_pool: Refreshing state... [id=182ab6cf-075a-4063-b64b-662d1649c245]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be updated in-place
  ~ resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      ~ desired_nodes    = 0 -> 1
        id               = "182ab6cf-075a-4063-b64b-662d1649c245"
        name             = "my-pool-1"
        # (17 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Modifying... [id=182ab6cf-075a-4063-b64b-662d1649c245]
ovh_cloud_project_kube_nodepool.node_pool: Modifications complete after 5s [id=182ab6cf-075a-4063-b64b-662d1649c245]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
❯ terraform apply --auto-approve
ovh_cloud_project_kube_nodepool.node_pool: Refreshing state... [id=182ab6cf-075a-4063-b64b-662d1649c245]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be updated in-place
  ~ resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      ~ desired_nodes    = 0 -> 1
        id               = "182ab6cf-075a-4063-b64b-662d1649c245"
        name             = "my-pool-1"
        # (17 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Modifying... [id=182ab6cf-075a-4063-b64b-662d1649c245]
ovh_cloud_project_kube_nodepool.node_pool: Modifications complete after 5s [id=182ab6cf-075a-4063-b64b-662d1649c245]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
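
Until the root cause is fixed, a possible mitigation would be to ignore drift on desired_nodes and manage the node count out of band. Note this is untested, and it only silences the endless diff; the pool is still created with 0 nodes:

resource "ovh_cloud_project_kube_nodepool" "node_pool" {
  service_name  = "[redacted]"
  kube_id       = "[redacted]"
  name          = "my-pool-1"
  flavor_name   = "b2-7"
  desired_nodes = 1

  lifecycle {
    # Untested workaround: stops the endless 0 -> 1 in-place updates, but does
    # not fix the actual bug -- the nodepool is still created with 0 nodes.
    ignore_changes = [desired_nodes]
  }
}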

References

This might be related to the following PR

Additional context

The bug seems to have appeared in version 0.29.0, as I've successfully created the nodepool with 0.28.1:
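
A minimal pin to the last known-good version, matching the downgrade shown in the log below:

terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
      # Pin below 0.29.0, where the regression seems to have been introduced.
      version = "0.28.1"
    }
  }
}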

❯ terraform destroy --auto-approve
ovh_cloud_project_kube_nodepool.node_pool: Refreshing state... [id=0e68561b-4c8b-42f7-b27b-5d9c41580d6d]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be destroyed
  - resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      - anti_affinity    = false -> null
      - autoscale        = false -> null
      - available_nodes  = 0 -> null
      - created_at       = "2023-11-01T21:53:27Z" -> null
      - current_nodes    = 0 -> null
      - desired_nodes    = 0 -> null
      - flavor           = "b2-7" -> null
      - flavor_name      = "b2-7" -> null
      - id               = "0e68561b-4c8b-42f7-b27b-5d9c41580d6d" -> null
      - kube_id          = "[redacted]" -> null
      - max_nodes        = 0 -> null
      - min_nodes        = 0 -> null
      - monthly_billed   = false -> null
      - name             = "my-pool-1" -> null
      - project_id       = "[redacted]" -> null
      - service_name     = "[redacted]" -> null
      - size_status      = "CAPACITY_OK" -> null
      - status           = "READY" -> null
      - up_to_date_nodes = 0 -> null
      - updated_at       = "2023-11-01T21:53:38Z" -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Destroying... [id=0e68561b-4c8b-42f7-b27b-5d9c41580d6d]
ovh_cloud_project_kube_nodepool.node_pool: Still destroying... [id=0e68561b-4c8b-42f7-b27b-5d9c41580d6d, 10s elapsed]
ovh_cloud_project_kube_nodepool.node_pool: Destruction complete after 15s

Destroy complete! Resources: 1 destroyed.
❯ terraform init -upgrade

Initializing the backend...

Initializing provider plugins...
- Finding ovh/ovh versions matching "0.28.1"... <-- downgrading the provider
- Installing ovh/ovh v0.28.1...
- Installed ovh/ovh v0.28.1 (signed by a HashiCorp partner, key ID F56D1A6CBDAAADA5)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

❯ time terraform apply --auto-approve

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:

  + create

Terraform will perform the following actions:

  # ovh_cloud_project_kube_nodepool.node_pool will be created
  + resource "ovh_cloud_project_kube_nodepool" "node_pool" {
      + anti_affinity    = (known after apply)
      + autoscale        = (known after apply)
      + available_nodes  = (known after apply)
      + created_at       = (known after apply)
      + current_nodes    = (known after apply)
      + desired_nodes    = 1
      + flavor           = (known after apply)
      + flavor_name      = "b2-7"
      + id               = (known after apply)
      + kube_id          = "[redacted]"
      + max_nodes        = (known after apply)
      + min_nodes        = (known after apply)
      + monthly_billed   = (known after apply)
      + name             = "my-pool-1"
      + project_id       = (known after apply)
      + service_name     = "[redacted]"
      + size_status      = (known after apply)
      + status           = (known after apply)
      + up_to_date_nodes = (known after apply)
      + updated_at       = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
ovh_cloud_project_kube_nodepool.node_pool: Creating...
--8<--
ovh_cloud_project_kube_nodepool.node_pool: Creation complete after 4m55s [id=402e3ea4-ee0f-44c1-b42a-5692be09476c]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
terraform apply --auto-approve 1.02s user 0.37s system 0% cpu 4:56.01 total <-- this takes longer, as expected when a compute node is actually provisioned

Everything is in order with `kubectl get nodepool`

❯ kubectl get nodepool
NAME                  FLAVOR   AUTOSCALED   MONTHLYBILLED   ANTIAFFINITY   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   MIN   MAX   AGE
my-pool-1             b2-7     false        false           false          1         1         1            0           0     100   5m21s