outscale/terraform-provider-outscale

Error: Unable to find LoadBalancer

naicoram01 opened this issue · 6 comments

Provider 0.7.0 (also reproduced with 0.5.3)

The problem is an error issued when executing Terraform:
"Error: Unable to find LoadBalancer". Even with Terraform debug logging enabled, I don't get any more detail about the problem.

Thank you so much for your help.

Best regards,
Mohammed SETTAF

I will add this warning from the debug output, in case it can help identify the source of the problem:
2022-12-08T14:51:28.937+0100 [WARN] Provider "registry.terraform.io/outscale-dev/outscale" produced an invalid plan for outscale_security_group_rule.rule4_sgfrontaxwaygtw80xxlb, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .inbound_rules: block count in plan (5) disagrees with count in config (0)
- .outbound_rules: block count in plan (1) disagrees with count in config (0)

Hi @naicoram01,

Can you share the Terraform file to help us investigate your issue?

Best regards ^^

Hi !

I just got the same error.
It worked very well last week, but today, surprisingly, it doesn't work and I get the "Unable to find LoadBalancer" error.

Here is the debug output:

╷
│ Error: Unable to find LoadBalancer
│ 
│   with module.vpc_classic.outscale_load_balancer.load_balancer001[0],
│   on vpc_classic/lb.tf line 1, in resource "outscale_load_balancer" "load_balancer001":
│    1: resource "outscale_load_balancer" "load_balancer001" {
│ 
╵
2023-01-16T11:21:26.245+0100 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = error reading from server: EOF"
2023-01-16T11:21:26.247+0100 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/outscale-dev/outscale/0.7.0/darwin_arm64/terraform-provider-outscale_v0.7.0 pid=15157
2023-01-16T11:21:26.247+0100 [DEBUG] provider: plugin exited

Here is the Terraform file:

resource "outscale_load_balancer" "load_balancer001" {
  count              = var.balanced ? 1 : 0
  load_balancer_name = "LB-TEST"
  subregion_names    = ["${var.region}a"]
  listeners {
      backend_port           = 443
      backend_protocol       = "HTTPS"
      load_balancer_protocol = "HTTPS"
      load_balancer_port     = 443
      server_certificate_id  = outscale_server_certificate.server_certificate_001[count.index].id
  }
  tags {
      key   = "name"
      value = "LB-TEST"
  }
}

resource "outscale_load_balancer_vms" "outscale_load_balancer_vms001" {
  count              = var.balanced ? 1 : 0
  load_balancer_name = "LB-TEST"
  backend_vm_ids     = [<vm_ids>]
}

resource "outscale_server_certificate" "server_certificate_001" {
  count       =  var.balanced ? 1 : 0
  name        =  "Certs-TEST"
  body        =  var.body
  chain       =  var.chain
  private_key =  var.private_key
}

Thanks for the help,
Best Regards.

Hi @rchallie,

Thanks for your feedback.
We will investigate that.

Best regards ^^

Hello,

Last time, I found the origin of the problem myself. Now I am hitting the same problem again, and this time I can't find the source of the error.

I can't send the whole configuration because it is more than 5,000 lines.

Here is the warning I get from the debug output:

2023-04-25T09:34:54.361+0200 [WARN] Provider "registry.terraform.io/outscale-dev/outscale" produced an invalid plan for outscale_load_balancer_attributes.lbfrontiam443to8443aza_attributes, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .access_log: block count in plan (1) disagrees with count in config (0)
- .health_check: block count in plan (1) disagrees with count in config (0)
- .tags: block count in plan (1) disagrees with count in config (0)
....
2023-04-25T09:34:54.346+0200 [WARN] Provider "registry.terraform.io/outscale-dev/outscale" produced an unexpected new value for outscale_load_balancer_attributes.lbfrontiam443to8443aza_attributes during refresh.
- .backend_vm_ids[0]: was cty.StringVal("i-67fb772b"), but now cty.StringVal("i-bca93599")
- .backend_vm_ids[1]: was cty.StringVal("i-bca93599"), but now cty.StringVal("i-bc896633")
- .backend_vm_ids[2]: was cty.StringVal("i-bd2f5d9c"), but now cty.StringVal("i-6a6c39f0")
- .backend_vm_ids[3]: was cty.StringVal("i-6a6c39f0"), but now cty.StringVal("i-3d0455ec")
- .backend_vm_ids[5]: was cty.StringVal("i-3d0455ec"), but now cty.StringVal("i-bd2f5d9c")
- .backend_vm_ids[6]: was cty.StringVal("i-bc896633"), but now cty.StringVal("i-67fb772b")

==============================================================================

The related configuration is below:

[=== IAM-443-TO-8443-LB ===]

resource "outscale_load_balancer" "lbfrontiam443to8443aza" {
  load_balancer_name = "LB-FRONT-IAM-443-TO-8443-AZa"
  listeners {
      backend_port           = 8443
      backend_protocol       = "HTTPS"
      load_balancer_protocol = "HTTPS"
      load_balancer_port     = 443
      server_certificate_id  = "arn:aws:iam::848926096090:server-certificate/Cockpit/VPC-FRONT-iamservices"
  }
  subnets            = [outscale_subnet.subfrontlbaza.subnet_id]
  security_groups    = [outscale_security_group.sgfrontiam443to8443lb.security_group_id]
  load_balancer_type = "internal"
  tags {
      key   = "name"
      value = "LB-FRONT-IAM-443-TO-8443"
  }
  #depends_on = [outscale_load_balancer_policy.load_balancer_policyIAM]
}

resource "outscale_load_balancer_policy" "lbfrontiam443to8443aza_policy" {
  load_balancer_name = outscale_load_balancer.lbfrontiam443to8443aza.load_balancer_name
  policy_name        = "PSN-959-AM-LB-cookie"
  policy_type        = "app"
  cookie_name        = "buxula_ha01"
  depends_on         = [outscale_load_balancer.lbfrontiam443to8443aza]
}

resource "outscale_load_balancer_attributes" "lbfrontiam443to8443aza_attributes" {
  load_balancer_name = outscale_load_balancer.lbfrontiam443to8443aza.load_balancer_name
  load_balancer_port = 443
  policy_names       = [outscale_load_balancer_policy.lbfrontiam443to8443aza_policy.policy_name]
  depends_on         = [outscale_load_balancer_policy.lbfrontiam443to8443aza_policy]
}

Thank you so much for your help.

Hello @naicoram01,

Thanks for reaching out to us.

  • If you get exactly Unable to find LoadBalancer or Unable to find "resource_name", the problem is a missing dependency between the load_balancer (or "resource_name") data source and the corresponding resource.
    You can use depends_on to avoid this error, or reference an attribute of the resource directly so that Terraform infers the dependency.
resource "outscale_load_balancer" "load_balancer_test" {
  load_balancer_name = "terraform-public-load-balancer"
  subregion_names    = ["${var.region}a"]
  listeners {
      backend_port           = 8080
      backend_protocol       = "HTTP"
      load_balancer_protocol = "HTTP"
      load_balancer_port     = 8080
  }
  tags {
      key   = "name"
      value = "terraform-public-load-balancer"
  }
}

## Option 1: explicit dependency with depends_on
data "outscale_load_balancer" "load_balancer01" {
  filter {
      name   = "load_balancer_names"
      values = ["terraform-public-load-balancer"]
  }
  depends_on = [outscale_load_balancer.load_balancer_test]
}
## Option 2: implicit dependency through a direct reference
data "outscale_load_balancer" "load_balancer01" {
  filter {
      name   = "load_balancer_names"
      values = [outscale_load_balancer.load_balancer_test.load_balancer_name]
  }
}
  • If you received a different error message, please open a new issue.
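For reference, the same reasoning applies to resources that reference a load balancer by name, not only data sources. As a sketch based on the config posted earlier in this thread (names are the reporter's, not a verified fix), outscale_load_balancer_vms could reference the resource attribute instead of the literal string "LB-TEST":

```hcl
resource "outscale_load_balancer_vms" "outscale_load_balancer_vms001" {
  count = var.balanced ? 1 : 0
  # Referencing the resource attribute (instead of the literal "LB-TEST")
  # creates an implicit dependency, so Terraform waits for the load
  # balancer to exist before attaching the backend VMs.
  load_balancer_name = outscale_load_balancer.load_balancer001[count.index].load_balancer_name
  backend_vm_ids     = [<vm_ids>]
}
```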

Best regards,