kubernetes-csi/external-provisioner

CSIDriver ControllerUnpublishVolume is not called before Delete

pierre-emmanuelJ opened this issue · 6 comments

What happened:

I'm developing a CSI driver on Kubernetes 1.27.

When I create a PVC and a Deployment using that PVC, everything works: the PVC is bound, the app is running, the controller created the volume and attached it to the node, and the node staged and published it.
The issue I have is on the deletion path.

When I delete the Deployment, all is good: no error logs in the plugin or in any of the sidecar containers.

The problem appears when I delete the PVC.

csi-provisioner logs:

I1031 11:20:56.552314       1 controller.go:1152] handleProtectionFinalizer Volume : &PersistentVolume{ObjectMeta:{pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe    87521e1d-4f6f-4060-976e-50cb1be457e2 1404661 0 2023-10-30 15:36:02 +0000 UTC <nil> <nil> map[] map[pv.kubernetes.io/provisioned-by:csi.exoscale.com volume.kubernetes.io/provisioner-deletion-secret-name: volume.kubernetes.io/provisioner-deletion-secret-namespace:] [] [kubernetes.io/pv-protection external-attacher/csi-exoscale-com] [{csi-provisioner Update v1 2023-10-30 15:36:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:pv.kubernetes.io/provisioned-by":{},"f:volume.kubernetes.io/provisioner-deletion-secret-name":{},"f:volume.kubernetes.io/provisioner-deletion-secret-namespace":{}}},"f:spec":{"f:accessModes":{},"f:capacity":{".":{},"f:storage":{}},"f:claimRef":{".":{},"f:apiVersion":{},"f:kind":{},"f:name":{},"f:namespace":{},"f:resourceVersion":{},"f:uid":{}},"f:csi":{".":{},"f:driver":{},"f:fsType":{},"f:volumeAttributes":{".":{},"f:storage.kubernetes.io/csiProvisionerIdentity":{}},"f:volumeHandle":{}},"f:nodeAffinity":{".":{},"f:required":{}},"f:persistentVolumeReclaimPolicy":{},"f:storageClassName":{},"f:volumeMode":{}}} } {csi-attacher Update v1 2023-10-30 15:36:28 +0000 UTC FieldsV1 {"f:metadata":{"f:finalizers":{"v:\"external-attacher/csi-exoscale-com\"":{}}}} } {kube-controller-manager Update v1 2023-10-31 08:27:48 +0000 UTC FieldsV1 {"f:status":{"f:phase":{}}} status}]},Spec:PersistentVolumeSpec{Capacity:ResourceList{storage: {{214748364800 0} {<nil>}  BinarySI},},PersistentVolumeSource:PersistentVolumeSource{GCEPersistentDisk:nil,AWSElasticBlockStore:nil,HostPath:nil,Glusterfs:nil,NFS:nil,RBD:nil,ISCSI:nil,Cinder:nil,CephFS:nil,FC:nil,Flocker:nil,FlexVolume:nil,AzureFile:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Local:nil,StorageOS:nil,CSI:&CSIPersistentVolumeSource{Driver:csi.exoscale.com,VolumeHandle:at-vie-1/066db7c3-721b-4768-99e4-f5d952a2a4d7,ReadOnly:false,FSType:ext4,VolumeAttributes:map[string]string{storage.kubernetes.io/csiProvisionerIdentity: 1698680094358-472-csi.exoscale.com,},ControllerPublishSecretRef:nil,NodeStageSecretRef:nil,NodePublishSecretRef:nil,ControllerExpandSecretRef:nil,NodeExpandSecretRef:nil,},},AccessModes:[ReadWriteOnce],ClaimRef:&ObjectReference{Kind:PersistentVolumeClaim,Namespace:default,Name:my-exo-pvc,UID:94593123-d3de-4c5c-bc1e-a4b893a1adbe,APIVersion:v1,ResourceVersion:1222612,FieldPath:,},PersistentVolumeReclaimPolicy:Delete,StorageClassName:exoscale-sbs,MountOptions:[],VolumeMode:*Filesystem,NodeAffinity:&VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[]NodeSelectorTerm{NodeSelectorTerm{MatchExpressions:[]NodeSelectorRequirement{NodeSelectorRequirement{Key:topology.csi.exoscale.com/zone,Operator:In,Values:[at-vie-1],},},MatchFields:[]NodeSelectorRequirement{},},},},},},Status:PersistentVolumeStatus{Phase:Released,Message:,Reason:,LastPhaseTransitionTime:<nil>,},}
I1031 11:20:56.552556       1 controller.go:1239] shouldDelete volume "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe"
I1031 11:20:56.552564       1 controller.go:1269] shouldDelete volume "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe" is true
I1031 11:20:56.552569       1 controller.go:1113] shouldDelete Volume: "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe"
I1031 11:20:56.552576       1 controller.go:1509] delete "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe": started
I1031 11:20:56.552732       1 controller.go:1279] volume pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe does not need any deletion secrets
E1031 11:20:56.552802       1 controller.go:1519] delete "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe": volume deletion failed: persistentvolume pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe is still attached to node k8s-node-1
W1031 11:20:56.552884       1 controller.go:989] Retrying syncing volume "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe", failure 38
E1031 11:20:56.552907       1 controller.go:1007] error syncing volume "pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe": persistentvolume pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe is still attached to node k8s-node-1
I1031 11:20:56.553447       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolume", Namespace:"", Name:"pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe", UID:"87521e1d-4f6f-4060-976e-50cb1be457e2", APIVersion:"v1", ResourceVersion:"1404661", FieldPath:""}): type: 'Warning' reason: 'VolumeFailedDelete' persistentvolume pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe is still attached to node k8s-node-1
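
The same error also surfaces as a Warning event on the PV, visible with:

kubectl describe pv pvc-94593123-d3de-4c5c-bc1e-a4b893a1adbe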

What you expected to happen:

What I don't understand is: why doesn't the csi-provisioner call ControllerUnpublishVolume before trying to delete the volume?

Here the csi-provisioner tries to delete the volume while it is still attached to k8s-node-1.

We end up here:
https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L1254
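
To see what the provisioner sees, the VolumeAttachment objects can be listed directly; a hypothetical invocation using the VolumeAttachment API field paths:

kubectl get volumeattachments.storage.k8s.io \
  -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached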

You can see from my plugin's logs that ControllerUnpublishVolume is never called by the provisioner:

I1031 09:22:09.124730       1 driver.go:63] driver: csi.exoscale.com version: v0.0.1
I1031 09:22:09.139906       1 main.go:50] NewDriver OK
I1031 09:22:09.139924       1 driver.go:113] Removing existing socket if existing
I1031 09:22:09.140252       1 driver.go:170] CSI server started on unix:///var/lib/csi/sockets/pluginproxy/csi.sock
I1031 09:22:09.211490       1 identity.go:18] GetPluginInfo called
I1031 09:22:09.212153       1 identity.go:50] GetPluginCapabilities called
I1031 09:22:09.213320       1 controller.go:360] ControllerGetCapabilities
I1031 09:22:09.276583       1 identity.go:18] GetPluginInfo called
I1031 09:22:09.277584       1 identity.go:50] GetPluginCapabilities called
I1031 09:22:09.279136       1 controller.go:360] ControllerGetCapabilities
I1031 09:22:09.368168       1 identity.go:18] GetPluginInfo called
I1031 09:22:09.370209       1 controller.go:360] ControllerGetCapabilities
I1031 09:22:09.535828       1 identity.go:18] GetPluginInfo called
I1031 09:22:09.540682       1 identity.go:50] GetPluginCapabilities called
I1031 09:22:09.542205       1 controller.go:360] ControllerGetCapabilities
I1031 09:22:09.542733       1 controller.go:360] ControllerGetCapabilities
I1031 09:22:09.557462       1 identity.go:18] GetPluginInfo called
I1031 09:32:47.994324       1 controller.go:193] ControllerPublishVolume
I1031 09:32:51.539137       1 controller.go:193] ControllerPublishVolume

Anything else we need to know?:

Here are my controller/volume capabilities:

var (
    // controllerCapabilities represents the capabilities of the Exoscale Block Volumes
    controllerCapabilities = []csi.ControllerServiceCapability_RPC_Type{
        // This capability indicates the driver supports dynamic volume provisioning and deleting.
        csi.ControllerServiceCapability_RPC_CREATE_DELETE_VOLUME,
        // This capability indicates the driver implements ControllerPublishVolume and ControllerUnpublishVolume.
        // Operations that correspond to the Kubernetes volume attach/detach operations.
        // This may, for example, result in a "volume attach" operation against the
        // Google Cloud control plane to attach the specified volume to the specified node
        // for the Google Cloud PD CSI Driver.
        csi.ControllerServiceCapability_RPC_PUBLISH_UNPUBLISH_VOLUME,
        csi.ControllerServiceCapability_RPC_LIST_VOLUMES,
        // Currently the only way to consume a snapshot is to create
        // a volume from it. Therefore plugins supporting
        // CREATE_DELETE_SNAPSHOT MUST support creating volume from
        // snapshot.
        csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT,
        csi.ControllerServiceCapability_RPC_LIST_SNAPSHOTS,
        csi.ControllerServiceCapability_RPC_EXPAND_VOLUME,

        // TODO add this support.
        //
        // Indicates the SP supports the
        // ListVolumesResponse.entry.published_node_ids field and the
        // ControllerGetVolumeResponse.published_node_ids field.
        // The SP MUST also support PUBLISH_UNPUBLISH_VOLUME.
        //    csi.ControllerServiceCapability_RPC_LIST_VOLUMES_PUBLISHED_NODES,

        // Indicates the SP supports the ControllerGetVolume RPC.
        // This enables COs to, for example, fetch per volume
        // condition after a volume is provisioned.
        csi.ControllerServiceCapability_RPC_GET_VOLUME,
        // Indicates the SP supports the SINGLE_NODE_SINGLE_WRITER and/or
        // SINGLE_NODE_MULTI_WRITER access modes.
        // These access modes are intended to replace the
        // SINGLE_NODE_WRITER access mode to clarify the number of writers
        // for a volume on a single node. Plugins MUST accept and allow
        // use of the SINGLE_NODE_WRITER access mode when either
        // SINGLE_NODE_SINGLE_WRITER and/or SINGLE_NODE_MULTI_WRITER are
        // supported, in order to permit older COs to continue working.
        csi.ControllerServiceCapability_RPC_SINGLE_NODE_MULTI_WRITER,
    }

    // supportedAccessModes represents the supported access modes for the Exoscale Block Volumes
    supportedAccessModes = []csi.VolumeCapability_AccessMode{
        {
            Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
        },
        {
            Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_MULTI_WRITER,
        },
    }
)
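
For context, here is a minimal sketch of how such a list is typically surfaced through the ControllerGetCapabilities RPC, assuming the CSI spec Go bindings (csi "github.com/container-storage-interface/spec/lib/go/csi"); the receiver type is hypothetical:

func (d *controllerService) ControllerGetCapabilities(ctx context.Context, req *csi.ControllerGetCapabilitiesRequest) (*csi.ControllerGetCapabilitiesResponse, error) {
    caps := make([]*csi.ControllerServiceCapability, 0, len(controllerCapabilities))
    for _, c := range controllerCapabilities {
        // Wrap each RPC_Type constant in the oneof message the CSI spec expects.
        caps = append(caps, &csi.ControllerServiceCapability{
            Type: &csi.ControllerServiceCapability_Rpc{
                Rpc: &csi.ControllerServiceCapability_RPC{Type: c},
            },
        })
    }
    return &csi.ControllerGetCapabilitiesResponse{Capabilities: caps}, nil
}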

Environment:

  • Driver and sidecar versions:
    registry.k8s.io/sig-storage/csi-provisioner:v3.6.0
    registry.k8s.io/sig-storage/csi-attacher:v4.4.1
    registry.k8s.io/sig-storage/csi-snapshotter:v6.3.0
    registry.k8s.io/sig-storage/snapshot-controller:v6.3.0
    registry.k8s.io/sig-storage/csi-resizer:v1.9.0
    registry.k8s.io/sig-storage/livenessprobe:v2.11.0
    
  • Kubernetes version (use kubectl version): v1.27.7
  • OS (e.g. from /etc/os-release): Ubuntu 22.04.3 LTS
  • Kernel (e.g. uname -a): Linux k8s-node-1 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

ControllerPublish/Unpublish is called by external-attacher https://github.com/kubernetes-csi/external-attacher, have you verified csi-attacher sidecar logs?

Okay, I retried while watching the attacher logs closely:

The finalizer is added:

I1102 15:51:59.498178       1 controller.go:210] Started VA processing "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:51:59.498206       1 csi_handler.go:224] CSIHandler: processing VA "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:51:59.498212       1 csi_handler.go:251] Attaching "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:51:59.498224       1 csi_handler.go:421] Starting attach operation for "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:51:59.498306       1 csi_handler.go:341] Adding finalizer to PV "pvc-121b0b5b-3b05-4987-8689-b6aaefc4f2e2"
I1102 15:51:59.508670       1 csi_handler.go:350] PV finalizer added to "pvc-121b0b5b-3b05-4987-8689-b6aaefc4f2e2"
I1102 15:51:59.509704       1 csi_handler.go:740] Found NodeID at-vie-1/a3f85afe-fc52-4f17-9413-1d10188da29c in CSINode k8s-node-2
I1102 15:51:59.509797       1 csi_handler.go:312] VA finalizer added to "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:51:59.509817       1 csi_handler.go:326] NodeID annotation added to "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"

ControllerPublishVolume succeeds:

I1102 15:51:59.516332       1 connection.go:193] GRPC call: /csi.v1.Controller/ControllerPublishVolume
I1102 15:51:59.517056       1 connection.go:194] GRPC request: {"node_id":"at-vie-1/a3f85afe-fc52-4f17-9413-1d10188da29c","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":7}},"volume_context":{"storage.kubernetes.io/csiProvisionerIdentity":"1698939959768-7219-csi.exoscale.com"},"volume_id":"at-vie-1/93894f46-65cf-4d2d-8f7e-7f5dde06c953"}
I1102 15:52:00.497978       1 leaderelection.go:281] successfully renewed lease kube-system/external-attacher-leader-csi-exoscale-com
I1102 15:52:03.137079       1 connection.go:200] GRPC response: {"publish_context":{"csi.exoscale.com/volume-id":"93894f46-65cf-4d2d-8f7e-7f5dde06c953","csi.exoscale.com/volume-name":"pvc-121b0b5b-3b05-4987-8689-b6aaefc4f2e2","csi.exoscale.com/volume-zone":"at-vie-1"}}
I1102 15:52:03.137101       1 connection.go:201] GRPC error: <nil>

The volume has been marked as attached.

Now I delete the pod (the app Deployment using the PVC).
Nothing happens in the logs of the plugin container, the attacher, or the provisioner: no errors, no info messages (this is the normal behaviour IMO).

The problem comes here:

When I delete the PVC, the provisioner still reports the same error ("is still attached to node").

And in the attacher logs, nothing: no ControllerUnpublishVolume request is sent :/

Only these info logs:

I1102 15:56:24.921973       1 reflector.go:378] k8s.io/client-go/informers/factory.go:150: forcing resync
I1102 15:56:24.922093       1 controller.go:210] Started VA processing "csi-e163344c3199f18f0b7b60a59701a18c5eb7b4e96ec18b9f0fba85792a2a91fe"
I1102 15:56:24.922102       1 controller.go:210] Started VA processing "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:56:24.922112       1 csi_handler.go:224] CSIHandler: processing VA "csi-e163344c3199f18f0b7b60a59701a18c5eb7b4e96ec18b9f0fba85792a2a91fe"
I1102 15:56:24.922122       1 csi_handler.go:246] "csi-e163344c3199f18f0b7b60a59701a18c5eb7b4e96ec18b9f0fba85792a2a91fe" is already attached
I1102 15:56:24.922127       1 controller.go:210] Started VA processing "csi-01f42102068772eb39c9ed27a41e1127cc21851b529683598f8a8163730852cb"
I1102 15:56:24.922134       1 csi_handler.go:224] CSIHandler: processing VA "csi-01f42102068772eb39c9ed27a41e1127cc21851b529683598f8a8163730852cb"
I1102 15:56:24.922141       1 csi_handler.go:246] "csi-01f42102068772eb39c9ed27a41e1127cc21851b529683598f8a8163730852cb" is already attached
I1102 15:56:24.922132       1 csi_handler.go:240] CSIHandler: finished processing "csi-e163344c3199f18f0b7b60a59701a18c5eb7b4e96ec18b9f0fba85792a2a91fe"
I1102 15:56:24.922167       1 controller.go:210] Started VA processing "csi-33a9b3c1a8f8398519af1cf33367ea8de2d3c7c7553fd9567f11e00838443d0c"
I1102 15:56:24.922172       1 csi_handler.go:224] CSIHandler: processing VA "csi-33a9b3c1a8f8398519af1cf33367ea8de2d3c7c7553fd9567f11e00838443d0c"
I1102 15:56:24.922176       1 csi_handler.go:246] "csi-33a9b3c1a8f8398519af1cf33367ea8de2d3c7c7553fd9567f11e00838443d0c" is already attached
I1102 15:56:24.922113       1 csi_handler.go:224] CSIHandler: processing VA "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:56:24.922189       1 csi_handler.go:246] "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207" is already attached
I1102 15:56:24.922194       1 csi_handler.go:240] CSIHandler: finished processing "csi-36280dcf41032faec4cfcfbbb5e94552a0f8ffe3071f17a8274e1b5615b66207"
I1102 15:56:24.922162       1 csi_handler.go:240] CSIHandler: finished processing "csi-01f42102068772eb39c9ed27a41e1127cc21851b529683598f8a8163730852cb"
I1102 15:56:24.922181       1 csi_handler.go:240] CSIHandler: finished processing "csi-33a9b3c1a8f8398519af1cf33367ea8de2d3c7c7553fd9567f11e00838443d0c"
I1102 15:56:25.935436       1 leaderelection.go:281] successfully renewed lease kube-system/external-attacher-leader-csi-exoscale-com

And there is no error in my plugin. By logging every call I can confirm that only ControllerPublishVolume is called...
No ControllerUnpublishVolume:

I1102 15:45:59.658158       1 driver.go:63] driver: csi.exoscale.com version: v0.0.1
I1102 15:45:59.680882       1 main.go:50] NewDriver OK
I1102 15:45:59.680902       1 driver.go:113] Removing existing socket if existing
I1102 15:45:59.681244       1 driver.go:170] CSI server started on unix:///var/lib/csi/sockets/pluginproxy/csi.sock
I1102 15:45:59.765144       1 identity.go:18] GetPluginInfo called
I1102 15:45:59.766166       1 identity.go:50] GetPluginCapabilities called
I1102 15:45:59.767422       1 controller.go:360] ControllerGetCapabilities
I1102 15:45:59.846812       1 identity.go:18] GetPluginInfo called
I1102 15:45:59.847251       1 identity.go:50] GetPluginCapabilities called
I1102 15:45:59.848611       1 controller.go:360] ControllerGetCapabilities
I1102 15:45:59.947297       1 identity.go:18] GetPluginInfo called
I1102 15:45:59.949470       1 controller.go:360] ControllerGetCapabilities
I1102 15:46:00.114314       1 identity.go:18] GetPluginInfo called
I1102 15:46:00.115432       1 identity.go:50] GetPluginCapabilities called
I1102 15:46:00.117423       1 controller.go:360] ControllerGetCapabilities
I1102 15:46:00.118705       1 controller.go:360] ControllerGetCapabilities
I1102 15:46:00.155076       1 identity.go:18] GetPluginInfo called
I1102 15:50:34.034830       1 controller.go:99] CreateVolume
I1102 15:51:59.519745       1 controller.go:193] ControllerPublishVolume
I1102 15:52:03.144165       1 controller.go:193] ControllerPublishVolume

I did more testing: the only way I can make it call ControllerUnpublishVolume is by deleting the volumeattachments.storage.k8s.io object with kubectl.

kubectl delete volumeattachments.storage.k8s.io csi-caa6f1181454e8ad08442c180c0ec7cea4ac26b9fefd756dd258fb7db62969f8

Then ControllerUnpublishVolume is called and the volume is deleted correctly.

We agree that this whole workflow should work just from deleting the PVC directly, right?

Cross-posting from Slack.

ControllerUnpublish for a volume should be called automatically when the last Pod that uses the volume is deleted from the node and the volume is unmounted. I.e. both NodeUnpublish and NodeUnstage must succeed (if you implement NodeStage).
Kubelet reports successful NodeUnpublish + NodeUnstage by removing the volume from node.status.volumesInUse. This is the signal for kube-controller-manager to delete the VolumeAttachment, and that in turn is the signal for the external-attacher to call ControllerUnpublish.

Check your node.status: is the volume still in volumesInUse?
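
For example, assuming the node name from the logs above:

kubectl get node k8s-node-1 -o jsonpath='{.status.volumesInUse}'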

You should not delete VolumeAttachments manually.

Both NodeStage/Unstage and NodePublish/Unpublish are implemented and return no error.
In NodeUnpublishVolume I'm using CleanupMountPoint from "k8s.io/mount-utils", and it logs a warning:

W1108 16:27:33.103813       1 mount_helper_common.go:142] Warning: "/var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount" is not a mountpoint, deleting
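
For reference, a minimal sketch of that unpublish path, assuming k8s.io/mount-utils (only CleanupMountPoint and mount.New are real API; the helper itself is hypothetical):

import mount "k8s.io/mount-utils"

// nodeUnpublish removes the pod-level mount; targetPath comes from the
// NodeUnpublishVolume request.
func nodeUnpublish(targetPath string) error {
    // CleanupMountPoint unmounts targetPath if it is still a mount point and
    // then deletes the directory; false skips the extensive mount-point check.
    return mount.CleanupMountPoint(targetPath, mount.New(""), false)
}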

Other than that:

During my mount:

I1108 15:59:35.710678       1 node.go:38] NodeStageVolume
I1108 15:59:35.710891       1 node.go:67] volume a6ab88d4-fa1b-4c87-bc00-44c979f2d2be has device path /dev/disk/by-id/virtio-a6ab88d4-fa1b-4c87-b
I1108 15:59:35.711349       1 node.go:100] Volume a6ab88d4-fa1b-4c87-bc00-44c979f2d2be will be mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/csi.exoscale.com/12993f2089ac9c88066ce7376034ddd5a94f0643da19b9bb058379a36d0fcdc3/globalmount with type ext4 and options
I1108 15:59:35.711368       1 node.go:107] Volume a6ab88d4-fa1b-4c87-bc00-44c979f2d2be has been mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/csi.exoscale.com/12993f2089ac9c88066ce7376034ddd5a94f0643da19b9bb058379a36d0fcdc3/globalmount with type ext4 and options
I1108 15:59:35.714011       1 node.go:380] NodeGetCapabilities
I1108 15:59:35.717786       1 node.go:380] NodeGetCapabilities
I1108 15:59:35.718591       1 node.go:380] NodeGetCapabilities
I1108 15:59:35.719578       1 node.go:160] NodePublishVolume
I1108 15:59:35.720180       1 mount_linux.go:243] Detected OS without systemd
I1108 15:59:35.720193       1 mount_linux.go:218] Mounting cmd (mount) with arguments (-t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/csi.exoscale.com/12993f2089ac9c88066ce7376034ddd5a94f0643da19b9bb058379a36d0fcdc3/globalmount /var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount)
I1108 15:59:35.724064       1 mount_linux.go:218] Mounting cmd (mount) with arguments (-t ext4 -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/csi.exoscale.com/12993f2089ac9c88066ce7376034ddd5a94f0643da19b9bb058379a36d0fcdc3/globalmount /var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount)
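
Those two mount commands are what a bind-mount-based NodePublishVolume with k8s.io/mount-utils normally produces; a minimal sketch, assuming the os and k8s.io/mount-utils imports (the helper name is hypothetical):

// publish bind-mounts the staged globalmount directory into the pod's target
// path. mount-utils turns the "bind" option into a bind mount followed by a
// bind,remount, which matches the two log lines above.
func publish(stagingPath, targetPath, fsType string) error {
    if err := os.MkdirAll(targetPath, 0o750); err != nil {
        return err
    }
    return mount.New("").Mount(stagingPath, targetPath, fsType, []string{"bind"})
}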

and during my unmount:

I1108 16:27:32.598496       1 node.go:114] NodeUnstageVolume
I1108 16:27:33.099237       1 node.go:294] NodeUnpublishVolume
I1108 16:27:33.099397       1 mount_helper_common.go:107] "/var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount" is a mountpoint, unmounting
I1108 16:27:33.099419       1 mount_linux.go:360] Unmounting /var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount
W1108 16:27:33.103813       1 mount_helper_common.go:142] Warning: "/var/lib/kubelet/pods/1b118825-d739-4bfb-90e4-eae830e2a59e/volumes/kubernetes.io~csi/pvc-8104a234-3f9f-4ac2-ac86-933d40da6a54/mount" is not a mountpoint, deleting
I1108 16:27:33.200446       1 node.go:380] NodeGetCapabilities

No error; from the log perspective it should be OK. What looks weird is this warning...

Concerning the node status: after the pod is completely deleted, the volume is still attached to the node in its status:

  volumesAttached:
  - devicePath: ""
    name: kubernetes.io/csi/csi.exoscale.com^at-vie-1/a6ab88d4-fa1b-4c87-bc00-44c979f2d2be
  volumesInUse:
  - kubernetes.io/csi/csi.exoscale.com^at-vie-1/a6ab88d4-fa1b-4c87-bc00-44c979f2d2be

It should not be there anymore. Maybe it comes from my NodeStage/Unstage and NodePublish/Unpublish implementation; I'm now trying to figure out if I'm doing something wrong.

Okay, I solved the issue; as expected, it was on my side!
In my node volume staging implementation I'm using FormatAndMount from "k8s.io/mount-utils". I was not handling its error correctly, so staging was returning success with no volume formatted and mounted...
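
For anyone hitting the same thing, a minimal sketch of the fix, assuming k8s.io/mount-utils, k8s.io/utils/exec (as utilexec), and the gRPC status/codes helpers; the function itself is hypothetical:

func stageVolume(devicePath, stagingPath, fsType string) error {
    mounter := &mount.SafeFormatAndMount{
        Interface: mount.New(""),
        Exec:      utilexec.New(),
    }
    // The bug: this error was dropped, so NodeStageVolume reported success
    // even when the device was never formatted or mounted.
    if err := mounter.FormatAndMount(devicePath, stagingPath, fsType, nil); err != nil {
        return status.Errorf(codes.Internal, "format and mount %s at %s: %v", devicePath, stagingPath, err)
    }
    return nil
}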

Thanks for all your replies!