kubernetes-csi/external-provisioner

Delete - invalid memory address or nil pointer dereference

NicklasWallgren opened this issue · 2 comments

What happened:
The external-provisioner crashed with an "invalid memory address or nil pointer dereference" panic while deleting a volume:

I1216 10:21:40.551942       1 controller.go:1472] delete "pvc-d580499e-43cb-413a-a87b-4b8f3b1662cf": started
E1216 10:21:40.552055       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 255 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1776c20, 0x25e29d0)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89
panic(0x1776c20, 0x25e29d0)
	/go/pkg/csiprow.XXXXJBiJKk/go-1.15/src/runtime/panic.go:969 +0x175
github.com/kubernetes-csi/external-provisioner/pkg/controller.(*csiProvisioner).Delete(0xc000732000, 0x1ba5be0, 0xc0004cc6c0, 0xc0004d2000, 0x0, 0x0)
	/workspace/pkg/controller/controller.go:1146 +0x522
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).deleteVolumeOperation(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0xc0004d2000, 0xc0005afc01, 0xc0007512c0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1474 +0x206
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncVolume(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0x195ab40, 0xc0004d2000, 0xc000a50a01, 0x0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1128 +0xde
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncVolumeHandler(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0xc0005b4570, 0x28, 0xc000700258, 0xc000b24090)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1069 +0x8c
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc000457180, 0xc0005afe18, 0x1702c00, 0xc00086e620, 0x0, 0x0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1011 +0x125
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0x1)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1028 +0x71
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(0xc000457180, 0x1ba5be0, 0xc0004cc6c0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:929 +0x3f
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:881 +0x3c
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007d4000)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007d4000, 0x1b67260, 0xc00084e5d0, 0x1, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007d4000, 0x3b9aca00, 0x0, 0x1, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0007d4000, 0x3b9aca00, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:881 +0x3d6
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x15ef422]

goroutine 255 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x1776c20, 0x25e29d0)
	/go/pkg/csiprow.XXXXJBiJKk/go-1.15/src/runtime/panic.go:969 +0x175
github.com/kubernetes-csi/external-provisioner/pkg/controller.(*csiProvisioner).Delete(0xc000732000, 0x1ba5be0, 0xc0004cc6c0, 0xc0004d2000, 0x0, 0x0)
	/workspace/pkg/controller/controller.go:1146 +0x522
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).deleteVolumeOperation(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0xc0004d2000, 0xc0005afc01, 0xc0007512c0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1474 +0x206
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncVolume(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0x195ab40, 0xc0004d2000, 0xc000a50a01, 0x0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1128 +0xde
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncVolumeHandler(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0xc0005b4570, 0x28, 0xc000700258, 0xc000b24090)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1069 +0x8c
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc000457180, 0xc0005afe18, 0x1702c00, 0xc00086e620, 0x0, 0x0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1011 +0x125
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000457180, 0x1ba5be0, 0xc0004cc6c0, 0x1)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:1028 +0x71
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(0xc000457180, 0x1ba5be0, 0xc0004cc6c0)
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:929 +0x3f
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:881 +0x3c
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0007d4000)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007d4000, 0x1b67260, 0xc00084e5d0, 0x1, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007d4000, 0x3b9aca00, 0x0, 0x1, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc0007d4000, 0x3b9aca00, 0xc0000a5140)
	/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
	/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller/controller.go:881 +0x3d6

What you expected to happen:
The deleteVolumeOperation triggered by the PV delete should complete successfully without a panic.

How to reproduce it:

Anything else we need to know?:
We are using Longhorn version 1.3.1, which ships external-provisioner v2.1.2: https://github.com/longhorn/longhorn/blob/v1.3.1/deploy/longhorn-images.txt#L2

See the discussion in longhorn/longhorn#4061 (comment).

Environment:

  • Driver version:
  • Kubernetes version (use kubectl version): v1.21.2
  • OS (e.g. from /etc/os-release): Ubuntu 18.04
  • Kernel (e.g. uname -a): Linux UbuntuNode01 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: k3s, longhorn

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

msau42 commented

Line in question:
https://github.com/kubernetes-csi/external-provisioner/blob/v2.1.2/pkg/controller/controller.go#L1146

Looks like this was fixed in v3.4.0: #796. Please upgrade and see if you still hit the issue.
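For context, the panic is the usual Go nil pointer dereference: an optional pointer field is read without a guard. The sketch below is a minimal, self-contained illustration of that guard pattern using the standard PersistentVolume types; it is not the actual external-provisioner code, and the fields it checks (Spec.CSI, Spec.ClaimRef) are only an assumption about where the nil value might come from in this crash.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// deleteSafely shows the nil-guard pattern: every optional pointer field on
// the PV is checked before it is dereferenced, so a PV with a missing CSI
// source or claim reference returns an error instead of panicking.
// Illustrative only; not the external-provisioner implementation.
func deleteSafely(pv *v1.PersistentVolume) error {
	if pv == nil {
		return fmt.Errorf("persistent volume is nil")
	}
	if pv.Spec.CSI == nil {
		return fmt.Errorf("PV %q has no CSI source", pv.Name)
	}
	if pv.Spec.ClaimRef == nil {
		return fmt.Errorf("PV %q has no claimRef", pv.Name)
	}
	fmt.Printf("would delete volume %s (claim %s/%s)\n",
		pv.Spec.CSI.VolumeHandle, pv.Spec.ClaimRef.Namespace, pv.Spec.ClaimRef.Name)
	return nil
}

func main() {
	// An empty PV exercises the guards without panicking.
	if err := deleteSafely(&v1.PersistentVolume{}); err != nil {
		fmt.Println("skipping delete:", err)
	}
}

Assuming the v3.4.0 fix referenced above amounts to guarding whatever field was nil at controller.go:1146, upgrading the csi-provisioner image (or moving to a Longhorn release that bundles v3.4.0 or later) should make this crash go away.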