Volumes are not deleted in AWS
shinji62 opened this issue · 4 comments
Hello,
I am using the EBS provisioner, and when I try to delete a volume the external-provisioner panics. I am not sure where I should look, but the EBS folks told me to open an issue in this repo as well:
kubernetes-sigs/aws-ebs-csi-driver#1301
I0713 04:12:57.216917 1 controller.go:1471] delete "pvc-27f008e4-9851-469a-9585-8667f6d1c28e": started
E0713 04:12:57.216974 1 controller.go:1481] delete "pvc-27f008e4-9851-469a-9585-8667f6d1c28e": volume deletion failed: persistentvolume pvc-27f008e4-9851-469a-9585-8667f6d1c28e is still attached to node ip-10-19-171-23.ap-northeast-1.compute.internal
W0713 04:12:57.217003 1 controller.go:989] Retrying syncing volume "pvc-27f008e4-9851-469a-9585-8667f6d1c28e", failure 0
E0713 04:12:57.217028 1 controller.go:1007] error syncing volume "pvc-27f008e4-9851-469a-9585-8667f6d1c28e": persistentvolume pvc-27f008e4-9851-469a-9585-8667f6d1c28e is still attached to node ip-10-19-171-23.ap-northeast-1.compute.internal
I0713 04:12:57.217098 1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-27f008e4-9851-469a-9585-8667f6d1c28e", UID:"75cc83a7-c1e2-4358-920a-67d4f216ef66", APIVersion:"v1", ResourceVersion:"9958087", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' persistentvolume pvc-27f008e4-9851-469a-9585-8667f6d1c28e is still attached to node ip-10-19-171-23.ap-northeast-1.compute.internal
I0713 04:12:57.227009 1 controller.go:1471] delete "pvc-27f008e4-9851-469a-9585-8667f6d1c28e": started
E0713 04:12:57.227125 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 10485 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1780320, 0x2976450})
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f3a0f0})
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x1780320, 0x2976450})
/go/pkg/csiprow.XXXXcGaFcn/go-1.17.3/src/runtime/panic.go:1038 +0x215
github.com/kubernetes-csi/external-provisioner/pkg/controller.(*csiProvisioner).Delete(0xc000288280, {0x1c09090, 0xc000698280}, 0xc000d3e780)
/workspace/pkg/controller/controller.go:1177 +0x4e2
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).deleteVolumeOperation(0xc0002ae780, {0x1c09090, 0xc000698280}, 0xc000d3e780)
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1473 +0x175
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).syncVolume(0xc0005b7cd0, {0x1c09090, 0xc000698280}, {0x1998940, 0xc000d3e780})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1109 +0xe7
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).syncVolumeHandler(0xc0002ae780, {0x1c09090, 0xc000698280}, {0xc000de7500, 0x40edaf})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1045 +0x69
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc0002ae780, 0xc00032ce10, {0x16bd780, 0xc000f3a0f0})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:987 +0x1b0
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0002ae780, {0x1c09090, 0xc000698280})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1004 +0x59
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).runVolumeWorker(...)
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:905
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).Run.func1.3()
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:857 +0x45
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fc444481f40)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1bd8120, 0xc00032ce10}, 0x1, 0xc00053c5a0)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x65, 0x2000)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x40, 0x3f, 0xc000916010)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).Run.func1
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:857 +0xa78
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x159a602]
goroutine 10485 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000f3a0f0})
/workspace/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x1780320, 0x2976450})
/go/pkg/csiprow.XXXXcGaFcn/go-1.17.3/src/runtime/panic.go:1038 +0x215
github.com/kubernetes-csi/external-provisioner/pkg/controller.(*csiProvisioner).Delete(0xc000288280, {0x1c09090, 0xc000698280}, 0xc000d3e780)
/workspace/pkg/controller/controller.go:1177 +0x4e2
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).deleteVolumeOperation(0xc0002ae780, {0x1c09090, 0xc000698280}, 0xc000d3e780)
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1473 +0x175
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).syncVolume(0xc0005b7cd0, {0x1c09090, 0xc000698280}, {0x1998940, 0xc000d3e780})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1109 +0xe7
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).syncVolumeHandler(0xc0002ae780, {0x1c09090, 0xc000698280}, {0xc000de7500, 0x40edaf})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1045 +0x69
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).processNextVolumeWorkItem.func1(0xc0002ae780, 0xc00032ce10, {0x16bd780, 0xc000f3a0f0})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:987 +0x1b0
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0002ae780, {0x1c09090, 0xc000698280})
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:1004 +0x59
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).runVolumeWorker(...)
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:905
sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).Run.func1.3()
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:857 +0x45
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fc444481f40)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1bd8120, 0xc00032ce10}, 0x1, 0xc00053c5a0)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x65, 0x2000)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x40, 0x3f, 0xc000916010)
/workspace/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller.(*ProvisionController).Run.func1
/workspace/vendor/sigs.k8s.io/sig-storage-lib-external-provisioner/v8/controller/controller.go:857 +0xa78
Knowing the exact version number of the external-provisioner would be a good start. Then we can look at the line where the nil pointer access happens (pkg/controller/controller.go:1177).
If it is old, perhaps try a recent one that is still supported?
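For context, a nil pointer panic inside a Delete call like the one in the trace above typically comes from dereferencing an optional field without a guard. The sketch below is hypothetical: the types and functions are simplified stand-ins, not the actual external-provisioner code at controller.go:1177; it only illustrates the general pattern and the defensive check.

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes PV types (hypothetical, for
// illustration only).
type CSIPersistentVolumeSource struct {
	VolumeHandle string
}

type PersistentVolumeSpec struct {
	CSI *CSIPersistentVolumeSource // nil for volumes without a CSI source
}

type PersistentVolume struct {
	Spec PersistentVolumeSpec
}

// unsafeHandle dereferences Spec.CSI without checking; it panics with
// "invalid memory address or nil pointer dereference" when Spec.CSI is nil.
func unsafeHandle(pv *PersistentVolume) string {
	return pv.Spec.CSI.VolumeHandle
}

// safeHandle guards against a nil CSI source and returns an error instead
// of panicking.
func safeHandle(pv *PersistentVolume) (string, error) {
	if pv.Spec.CSI == nil {
		return "", fmt.Errorf("persistentvolume has no CSI volume source")
	}
	return pv.Spec.CSI.VolumeHandle, nil
}

func main() {
	pv := &PersistentVolume{} // Spec.CSI left nil
	if _, err := safeHandle(pv); err != nil {
		fmt.Println("guarded:", err)
	}
}
```

With the guard, a volume that lacks the expected CSI source produces a retryable error event instead of crashing the controller's worker goroutine.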
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale