mumoshu/terraform-provider-eksctl

Error refreshing cluster state

Opened this issue · 3 comments

rhart commented

After successfully creating a cluster, then deleting the cluster outside of Terraform, and then running a terraform plan, I get the following error:

Terraform v1.0.0
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
eksctl_cluster.foo: Refreshing state... [id=c35krkllsbosck5knapg]
╷
│ Error: unhandled error: runtime error: invalid memory address or nil pointer dereference
│ goroutine 27 [running]:
│ runtime/debug.Stack(0xc0002af390, 0x13218a0, 0x202bca0)
│ 	runtime/debug/stack.go:24 +0x9f
│ github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster.ResourceCluster.func5.1(0xc0002af8b0)
│ 	github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster/resource.go:108 +0x5b
│ panic(0x13218a0, 0x202bca0)
│ 	runtime/panic.go:965 +0x1b9
│ github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster.doWriteKubeconfig(0xc0002af7d8, 0x183dea8, 0xc0004e75e0, 0xc00028fdd0, 0x15, 0xc00029f6d6, 0x9, 0x8, 0x203000)
│ 	github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster/cluster_create.go:113 +0xb7a
│ github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster.(*Manager).readCluster(0xc000125d28, 0x183dea8, 0xc0004e75e0, 0x1, 0xc0004e75e0, 0xc0002b3888)
│ 	github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster/cluster_read.go:41 +0x47a
│ github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster.ResourceCluster.func5(0xc0004e75e0, 0x126bc60, 0xc000122878, 0x0, 0x0)
│ 	github.com/mumoshu/terraform-provider-eksctl/pkg/resource/cluster/resource.go:112 +0x7a
│ github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc0000fa400, 0xc0002a5130, 0x126bc60, 0xc000122878, 0xc000122dc8, 0x0, 0x0)
│ 	github.com/hashicorp/terraform-plugin-sdk@v1.0.0/helper/schema/resource.go:455 +0x12e
│ github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ReadResource(0xc0001220b8, 0x1845600, 0xc000436000, 0xc0002a4f50, 0xc0001220b8, 0xc000436000, 0xc00020fa50)
│ 	github.com/hashicorp/terraform-plugin-sdk@v1.0.0/internal/helper/plugin/grpc_provider.go:525 +0x3dd
│ github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ReadResource_Handler(0x146ee00, 0xc0001220b8, 0x1845600, 0xc000436000, 0xc000287f80, 0x0, 0x1845600, 0xc000436000, 0xc0004ab500, 0x14b5)
│ 	github.com/hashicorp/terraform-plugin-sdk@v1.0.0/internal/tfplugin5/tfplugin5.pb.go:3153 +0x214
│ google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000b0c60, 0x1850bd8, 0xc000513200, 0xc0002b4a00, 0xc0000ecc60, 0x2038250, 0x0, 0x0, 0x0)
│ 	google.golang.org/grpc@v1.23.0/server.go:995 +0x482
│ google.golang.org/grpc.(*Server).handleStream(0xc0000b0c60, 0x1850bd8, 0xc000513200, 0xc0002b4a00, 0x0)
│ 	google.golang.org/grpc@v1.23.0/server.go:1275 +0xd2c
│ google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00050c1b0, 0xc0000b0c60, 0x1850bd8, 0xc000513200, 0xc0002b4a00)
│ 	google.golang.org/grpc@v1.23.0/server.go:710 +0xab
│ created by google.golang.org/grpc.(*Server).serveStreams.func1
│ 	google.golang.org/grpc@v1.23.0/server.go:708 +0xa5
│ 
│ 
│   with eksctl_cluster.foo,
│   on main.tf line 1, in resource "eksctl_cluster" "foo":
│    1: resource "eksctl_cluster" "foo" {

I was expecting that the provider would detect the cluster was deleted and then recreate it. Do I have the wrong expectations?

rhart commented

I'm not great at reading Go, but I think it's caused by the provider trying to run `eksctl utils write-kubeconfig --cluster=<name>` when the cluster no longer exists. In that case, is it possible to mark the resource as needing to be created again?

I faced exactly the same issue. In my case I was assigning `kubeconfig_path` in `resource "eksctl_cluster"` to a predefined path, because the provider was generating a new path every time I ran `terraform plan`.
It broke my Terraform state, but I restored it from an older version of the state file.

In the end I gave up on setting `kubeconfig_path` and reverted to the original configuration.

I'm having the exact same issue. Any resolution, @rhart?