Unmount fails in kubelet on K8s 1.18.14 and Ubuntu
jmmc-tools opened this issue · 18 comments
What happened:
kubelet reports errors while unmounting the fuse volume, and the volume has no mount point (it is located at this folder):
/var/lib/kubelet/pods/c5936d69-59ff-46e9-9054-50056b4b2d3d/volumes/azure~blobfuse/fuse
The folder has data: the blob container and its folders are listed correctly, but there is no mount point associated with it. A mount point is never created, yet the local folder is still connected to the blobfuse data, so the mount process is incomplete and the unmount therefore always fails as well.
How to find the errors: on the agent node, run this tail:
tail -2000 /var/log/syslog |grep fuse
Feb 10 16:55:07 aks-memoptimized-28286210-vmss00000C kubelet[16774]: I0210 16:55:07.829276 16774 reflector.go:181] Stopping reflector *v1.Secret (0s) from object-"logstreaming"/"blobfusecreds"
Feb 10 16:55:07 aks-memoptimized-28286210-vmss00000C kubelet[16774]: I0210 16:55:07.874833 16774 reconciler.go:196] operationExecutor.UnmountVolume started for volume "fuse" (UniqueName: "flexvolume-azure/blobfuse/c5936d69-59ff-46e9-9054-50056b4b2d3d-fuse") pod "c5936d69-59ff-46e9-9054-50056b4b2d3d" (UID: "c5936d69-59ff-46e9-9054-50056b4b2d3d")
Feb 10 16:55:07 aks-memoptimized-28286210-vmss00000C kubelet[16774]: E0210 16:55:07.883581 16774 nestedpendingoperations.go:301] Operation for "{volumeName:flexvolume-azure/blobfuse/c5936d69-59ff-46e9-9054-50056b4b2d3d-fuse podName:c5936d69-59ff-46e9-9054-50056b4b2d3d nodeName:}" failed. No retries permitted until 2021-02-10 16:55:08.383547099 +0000 UTC m=+469.396197557 (durationBeforeRetry 500ms). Error: "UnmountVolume.TearDown failed for volume \"fuse\" (UniqueName: \"flexvolume-azure/blobfuse/c5936d69-59ff-46e9-9054-50056b4b2d3d-fuse\") pod \"c5936d69-59ff-46e9-9054-50056b4b2d3d\" (UID: \"c5936d69-59ff-46e9-9054-50056b4b2d3d\") : remove /var/lib/kubelet/pods/c5936d69-59ff-46e9-9054-50056b4b2d3d/volumes/azure~blobfuse/fuse: directory not empty"
Feb 10 16:55:08 aks-memoptimized-28286210-vmss00000C kubelet[16774]: I0210 16:55:08.476582 16774 reconciler.go:196] operationExecutor.UnmountVolume started for volume "fuse" (UniqueName: "flexvolume-azure/blobfuse/c5936d69-59ff-46e9-9054-50056b4b2d3d-fuse") pod "c5936d69-59ff-46e9-9054-50056b4b2d3d" (UID: "c5936d69-59ff-46e9-9054-50056b4b2d3d")
What you expected to happen:
Mount and unmount succeed, as they do in K8s 1.17.x (it works correctly on that version).
How to reproduce it:
Create a pod on a node running K8s (AKS) 1.18.14 with Ubuntu 18.04.1 (SMP x86_64 GNU/Linux):
volumeMounts:
- mountPath: /var/lib/folderNmounted
  name: fuse
  subPath: folderInBlob
[..]
volumes:
- flexVolume:
    driver: azure/blobfuse
    options:
      container: name-of-container
      mountoptions: --file-cache-timeout-in-seconds=120
      tmppath: /tmp/blobfuse
    secretRef:
      name: blobfusecreds
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`):
- Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.14", GitCommit:"5de7fd1f9555368a86eb0f8f664dc58055c17269", GitTreeState:"clean", BuildDate:"2021-01-18T09:31:01Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g. from /etc/os-release): aks nodes 18.04.1-Ubuntu SMP x86_64 GNU/Linux
- Kernel (e.g. `uname -a`): 5.4.0-1036-azure #38~18.04.1-Ubuntu SMP Wed Jan 6 18:26:30 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
Are you using the latest version, 1.0.17? We did a workaround fix for this issue a few months ago: #93
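For reference, one way to check which driver image is actually installed (assuming the default blobfuse-flexvol-installer DaemonSet in kube-system, as shown later in this thread):
# Print the image used by the blobfuse flexvolume installer DaemonSet
kubectl -n kube-system get ds blobfuse-flexvol-installer \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'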
The local folder is still connected to the blob contents, but there is no mount point:
ls -la /var/lib/kubelet/pods/0d290342-e496-47b7-b04f-53943eba8d39/volumes/azure~blobfuse/fuse
total 12
drwxr-x--- 3 root root 4096 Feb 10 02:20 .
drwxr-x--- 3 root root 4096 Feb 10 02:20 ..
drwxr-x--- 3 root root 4096 Feb 10 02:20 data
So in your latest fix, the `if` that expects a non-zero exit status may not be firing:
if [ "$?" != "0" ]; then
echo "`date` EXEC: rm -r --one-file-system ${MNTPATH}" >> $LOG
rm -r --one-file-system "${MNTPATH}" >> $LOG 2>&1
fi
Because `cat /var/log/blobfuse-driver.log | grep -v Success` only reports this kind of message, and nothing about "EXEC: rm":
/blobfuse-influx -o allow_other --file-cache-timeout-in-seconds=120
Wed Feb 10 13:47:55 UTC 2021 EXEC: mkdir -p /var/lib/kubelet/pods/579c2832-6e31-4dbf-bc65-85242cb42c04/volumes/azure~blobfuse/fuse
Wed Feb 10 13:47:55 UTC 2021 INF: AZURE_STORAGE_ACCESS_KEY is set
Wed Feb 10 13:47:55 UTC 2021 INF: export storage account - export AZURE_STORAGE_ACCOUNT=************
Wed Feb 10 13:47:55 UTC 2021 EXEC: blobfuse /var/lib/kubelet/pods/579c2832-6e31-4dbf-bc65-85242cb42c04/volumes/azure~blobfuse/fuse --container-name=logstreaming-pro-blob-metrics --tmp-path=/tmp/blobfuse-influx -o allow_other --file-cache-timeout-in-seconds=120
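As an aside on the cleanup logic quoted above, here is a minimal sketch of a safer unmount path that only removes the directory after confirming it is no longer a mount point. It reuses the MNTPATH and LOG variables from the snippet above but is an illustration, not the driver's actual code:
#!/bin/bash
# Hypothetical cleanup helper, not the driver's real unmount script.
MNTPATH="$1"
LOG=/var/log/blobfuse-driver.log

# If the path is still a mount point, unmount it first (lazy unmount as a fallback
# in case the FUSE process is hung)
if findmnt -n "${MNTPATH}" > /dev/null 2>&1; then
  umount "${MNTPATH}" >> $LOG 2>&1 || umount -l "${MNTPATH}" >> $LOG 2>&1
fi

# Only remove the directory once nothing is mounted there anymore
if ! findmnt -n "${MNTPATH}" > /dev/null 2>&1; then
  echo "`date` EXEC: rm -r --one-file-system ${MNTPATH}" >> $LOG
  rm -r --one-file-system "${MNTPATH}" >> $LOG 2>&1
fi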
What's the result of `findmnt -n ${MNTPATH} 2>/dev/null | cut -d' ' -f1`?
The result is empty:
***@aks-*********-28286210-vmss000000:/#
***@aks-*********-28286210-vmss000000:/# findmnt --verbose /var/lib/kubelet/pods/579c2832-6e31-4dbf-bc65-85242cb42c04/volumes/azure~blobfuse/fuse
***@aks-*********-28286210-vmss000000:/#
***@aks-*********-28286210-vmss000000:/#
But if you list the folder, the remote blob files are displayed:
# ls -la /var/lib/kubelet/pods/579c2832-6e31-4dbf-bc65-85242cb42c04/volumes/azure~blobfuse/fuse
total 12
drwxr-x--- 3 root root 4096 Feb 10 13:47 .
drwxr-x--- 3 root root 4096 Feb 10 13:47 ..
drwxr-x--- 2 root root 4096 Feb 10 13:47 data
Here is the error message that keeps repeating in syslog, for more context on this case:
Feb 15 08:52:23 aks-*********-28286210-vmss000000 kubelet[4301]: I0215 08:52:23.885457 4301 reconciler.go:196] operationExecutor.UnmountVolume started for volume "fuse" (UniqueName: "flexvolume-azure/blobfuse/0d290342-e496-47b7-b04f-53943eba8d39-fuse") pod "0d290342-e496-47b7-b04f-53943eba8d39" (UID: "0d290342-e496-47b7-b04f-53943eba8d39")
Feb 15 08:52:23 aks-*********-28286210-vmss000000 kubelet[4301]: E0215 08:52:23.892043 4301 nestedpendingoperations.go:301] Operation for "{volumeName:flexvolume-azure/blobfuse/0d290342-e496-47b7-b04f-53943eba8d39-fuse podName:0d290342-e496-47b7-b04f-53943eba8d39 nodeName:}" failed. No retries permitted until 2021-02-15 08:54:25.892004845 +0000 UTC m=+520881.230263167 (durationBeforeRetry 2m2s). Error: "UnmountVolume.TearDown failed for volume \"fuse\" (UniqueName: \"flexvolume-azure/blobfuse/0d290342-e496-47b7-b04f-53943eba8d39-fuse\") pod \"0d290342-e496-47b7-b04f-53943eba8d39\" (UID: \"0d290342-e496-47b7-b04f-53943eba8d39\") : remove /var/lib/kubelet/pods/0d290342-e496-47b7-b04f-53943eba8d39/volumes/azure~blobfuse/fuse: directory not empty"
Could you run `mount | grep blobfuse` to check again? It seems that azure~blobfuse/fuse/data is not a blobfuse mount.
mount | grep blobfuse
That's the point: the folder behaves like a blobfuse mount, because I can browse all the files just as in the Azure portal's explorer, but if you run `mount | grep blobfuse` the result is empty again. So the pod no longer exists, the path is not mounted as a filesystem, and yet the local folder is still connected to the blob contents. Could it be an inode problem? I think this is a low-level problem, because the pod no longer exists on the node:
/# mount |grep -i fuse
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
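For anyone else hitting this state, a short diagnostic sketch (not part of the driver; the path is the one from this thread) to tell a live FUSE mount apart from a leftover directory:
POD_DIR=/var/lib/kubelet/pods/579c2832-6e31-4dbf-bc65-85242cb42c04/volumes/azure~blobfuse/fuse

# Does the kernel see a mount at this path?
grep "$POD_DIR" /proc/mounts
findmnt -n "$POD_DIR"
mountpoint "$POD_DIR"

# Is a blobfuse process still alive? If so, its mount may only be visible in a
# different mount namespace (for example when launched via systemd-run inside a container)
pgrep -a blobfuse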
While testing this, I changed the deployment YAML: when I remove "--file-cache-timeout-in-seconds=120", blobfuse is mounted and shows up in `mount | grep blobfuse`, and the pod has the folder mounted, but the folder is empty (not really connected to the blob, no files are displayed). So there is another problem to investigate there, and I typed the original config back in.
Were you able to fix it?
Not sure how to fix it; in that case, you can delete the pod with the unmount failure directly.
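If the pod stays stuck, a hedged sketch of the manual cleanup on the node (pod name, namespace and UID are placeholders; double-check the path before removing anything):
# Force-delete the stuck pod from the API server
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force

# On the node: lazily unmount the volume path if anything is still attached,
# then remove the leftover directory so kubelet can finish TearDown
MNTPATH=/var/lib/kubelet/pods/<pod-uid>/volumes/azure~blobfuse/fuse
umount -l "$MNTPATH" 2>/dev/null
rm -r --one-file-system "$MNTPATH"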
I suspect it's due to `subPath: folderInBlob`. Would you try again without subPath?
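To make the suggestion concrete, it amounts to dropping the subPath line from the volumeMounts entry in the repro manifest above:
volumeMounts:
- mountPath: /var/lib/folderNmounted
  name: fuse
  # subPath: folderInBlob   # removed for this test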
When I remove subPath, the result is the same: the volume is not mounted. Here are the node messages from when I delete the pod:
Feb 22 13:16:44 aks-**********-28286210-vmss00000C kubelet[21384]: I0222 13:16:44.949941 21384 kubelet.go:1926] SyncLoop (DELETE, "api"): "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)"
Feb 22 13:17:15 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:15.892151 21384 kubelet.go:1955] SyncLoop (PLEG): "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)", event: &pleg.PodLifecycleEvent{ID:"d58cc83d-f65b-4d76-9416-266659a92a9a", Type:"ContainerDied", Data:"4cd9064ec1d91b43208b3c9933ed0f4ba466c21b7f3d054a6f98ff3c0c181641"}
Feb 22 13:17:15 aks-************-28286210-vmss00000C kubelet[21384]: I0222 13:17:15.892593 21384 kubelet.go:1955] SyncLoop (PLEG): "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)", event: &pleg.PodLifecycleEvent{ID:"d58cc83d-f65b-4d76-9416-266659a92a9a", Type:"ContainerDied", Data:"ae9efad5ea987e5aa26e646e8e5f679857a37884b5f2e62e26a78bd6364cbcc7"}
Feb 22 13:17:15 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:15.908516 21384 reflector.go:181] Stopping reflector *v1.Secret (0s) from object-"logstreaming"/"blobfusecreds"
Feb 22 13:17:15 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:15.971625 21384 reconciler.go:196] operationExecutor.UnmountVolume started for volume "fuse" (UniqueName: "flexvolume-azure/blobfuse/d58cc83d-f65b-4d76-9416-266659a92a9a-fuse") pod "d58cc83d-f65b-4d76-9416-266659a92a9a" (UID: "d58cc83d-f65b-4d76-9416-266659a92a9a")
Feb 22 13:17:15 aks-*************-28286210-vmss00000C kubelet[21384]: I0222 13:17:15.977750 21384 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "flexvolume-azure/blobfuse/d58cc83d-f65b-4d76-9416-266659a92a9a-fuse" (OuterVolumeSpecName: "fuse") pod "d58cc83d-f65b-4d76-9416-266659a92a9a" (UID: "d58cc83d-f65b-4d76-9416-266659a92a9a"). InnerVolumeSpecName "fuse". PluginName "flexvolume-azure/blobfuse", VolumeGidValue ""
Feb 22 13:17:16 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:16.071967 21384 reconciler.go:319] Volume detached for volume "fuse" (UniqueName: "flexvolume-azure/blobfuse/d58cc83d-f65b-4d76-9416-266659a92a9a-fuse") on node "aks-***********-28286210-vmss00000c" DevicePath ""
Feb 22 13:17:16 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:16.917291 21384 kubelet.go:1926] SyncLoop (DELETE, "api"): "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)"
Feb 22 13:17:16 aks-***********-28286210-vmss00000C kubelet[21384]: I0222 13:17:16.921308 21384 kubelet.go:1920] SyncLoop (REMOVE, "api"): "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)"
Feb 22 13:17:16 aks-************-28286210-vmss00000C kubelet[21384]: I0222 13:17:16.921342 21384 kubelet.go:2118] Failed to delete pod "test-blobfuse-4q68n_logstreaming(d58cc83d-f65b-4d76-9416-266659a92a9a)", err: pod not found
Could this be related to this other bug with the fuse driver in Ubuntu? azure-storage-fuse#428
Could be. Would you set `--log-level=LOG_DEBUG` in mountOptions and try to get debugging info again? Thanks.
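Applied to the flexVolume options from the repro manifest, that would look roughly like the following (the volume name is assumed from the volumeMounts entry; keep your existing options and append the flag):
volumes:
- name: fuse
  flexVolume:
    driver: azure/blobfuse
    options:
      container: name-of-container
      mountoptions: "--file-cache-timeout-in-seconds=120 --log-level=LOG_DEBUG"
      tmppath: /tmp/blobfuse
    secretRef:
      name: blobfusecreds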
Sorry, I haven't had time yet; I will update with results from this point.
On the other hand, could the problem be related to the new containerd runtime for Kubernetes 1.19-1.2x? This page explains that the behaviour is different because Docker is no longer present and containerd is used instead:
https://docs.microsoft.com/en-us/azure/aks/cluster-configuration#container-runtime-configuration
Hi, I'm facing the same issue, but when I run with the -d option I see this error.
Apr 21 15:41:01 aks-agentpool-81843983-vmss000002 blobfuse[28182]: Function azs_destroy, in file /home/amnguye/Desktop/azure-storage-fuse/blobfuse/utilities.cpp, line 523: azs_destroy called.
Apr 21 15:41:01 aks-agentpool-81843983-vmss000002 kubelet[5112]: E0421 15:41:01.882741 5112 driver-call.go:266] Failed to unmarshal output for command: mount, output: "Running scope as unit: run-r8641be8b9a02473ca466dd988a59a218.scope
Running scope as unit: run-r89cf977564c444fc985d39bada8adca5.scope
FUSE library version: 2.9.7
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.31
flags=0x03fffffb
max_readahead=0x00020000
INIT: 7.19
flags=0x00000031
max_readahead=0x00400000
max_write=0x00400000
max_background=128
congestion_threshold=96
unique: 2, success, outsize: 40
fuse: reading device: Invalid argument
{ \"status\": \"Failure\", \"message\": \"Failed to mount device /dev/ at /var/lib/kubelet/pods/e1a0e429-df65-40d1-bd22-883388f8ef8c/volumes/azure~blobfuse/blobstorage, accountname:*******, error log:Wed Apr 21 15:41:01 UTC 2021 EXEC: /usr/bin/systemd-run --scope -- blobfuse /var/lib/kubelet/pods/e1a0e429-df65-40d1-bd22-883388f8ef8c/volumes/azure~blobfuse/blobstorage --container-name=something --tmp-path=/tmp/something -o allow_other -d --file-cache-timeout-in-seconds=120 -o allow_other --log-level=LOG_DEBUG\" }
", error: invalid character 'R' looking for beginning of value
Apr 21 15:41:01 aks-agentpool-81843983-vmss000002 kubelet[5112]: W0421 15:41:01.882782 5112 driver-call.go:149] FlexVolume: driver call failed: executable: /etc/kubernetes/volumeplugins/azure~blobfuse/blobfuse, args: [mount /var/lib/kubelet/pods/e1a0e429-df65-40d1-bd22-883388f8ef8c/volumes/azure~blobfuse/blobstorage {"container":"something","kubernetes.io/fsType":"","kubernetes.io/pod.name":"******-5fb769cdfc-rh5c7","kubernetes.io/pod.namespace":"bc-v1","kubernetes.io/pod.uid":"e1a0e429-df65-40d1-bd22-883388f8ef8c","kubernetes.io/pvOrVolumeName":"blobstorage","kubernetes.io/readwrite":"rw","kubernetes.io/secret/accountconnectionstring":"RGVmYXVsdEVuZHBvaW50c1Byb3RvY29sPWh0dHBzO0FjY291bnROYW1lPWJjaW5zaWdodGJsb2I7QWNjb3VudEtleT1QdHA0Vk4wWGU2REYwZ1BabDNqZVBaOHhNdEZJUzlpMFM5YVFRMlpSTkg0RThSRUQ3SUh4SDNiMjlvWGttNFgrcFN6enVqcjlNTTc0ZnRRTmpMME9qZz09O0VuZHBvaW50U3VmZml4PWNvcmUud2luZG93cy5uZXQ=","kubernetes.io/secret/accountkey":"UHRwNFZOMFhlNkRGMGdQWmwzamVQWjh4TXRGSVM5aTBTOWFRUTJaUk5INEU4UkVEN0lIeEgzYjI5b1hrbTRYK3BTenp1anI5TU03NGZ0UU5qTDBPamc9PQ==","kubernetes.io/secret/accountname":"YmNpbnNpZ2h0YmxvYg==","kubernetes.io/serviceAccount.name":"default","mountoptions":"-d --file-cache-timeout-in-seconds=120 -o allow_other --log-level=LOG_DEBUG","tmppath":"/tmp/something"}], error: exit status 1, output: "Running scope as unit: run-r8641be8b9a02473ca466dd988a59a218.scope
Running scope as unit: run-r89cf977564c444fc985d39bada8adca5.scope
FUSE library version: 2.9.7
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 2, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.31
flags=0x03fffffb
max_readahead=0x00020000
INIT: 7.19
flags=0x00000031
max_readahead=0x00400000
max_write=0x00400000
max_background=128
congestion_threshold=96
unique: 2, success, outsize: 40
fuse: reading device: Invalid argument
{ \"status\": \"Failure\", \"message\": \"Failed to mount device /dev/ at /var/lib/kubelet/pods/e1a0e429-df65-40d1-bd22-883388f8ef8c/volumes/azure~blobfuse/blobstorage, accountname:********, error log:Wed Apr 21 15:41:01 UTC 2021 EXEC: /usr/bin/systemd-run --scope -- blobfuse /var/lib/kubelet/pods/e1a0e429-df65-40d1-bd22-883388f8ef8c/volumes/azure~blobfuse/blobstorage --container-name=something --tmp-path=/tmp/something -o allow_other -d --file-cache-timeout-in-seconds=120 -o allow_other --log-level=LOG_DEBUG\" }
"
Apr 21 15:41:01 aks-agentpool-81843983-vmss000002 kubelet[5112]: E0421 15:41:01.882916 5112 nestedpendingoperations.go:301] Operation for "{volumeName:azure/blobfuse/e1a0e429-df65-40d1-bd22-883388f8ef8c-blobstorage podName:e1a0e429-df65-40d1-bd22-883388f8ef8c nodeName:}" failed. No retries permitted until 2021-04-21 15:41:33.882893614 +0000 UTC m=+112885.696156115 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"blobstorage\" (UniqueName: \"azure/blobfuse/e1a0e429-df65-40d1-bd22-883388f8ef8c-blobstorage\") pod \"******-5fb769cdfc-rh5c7\" (UID: \"e1a0e429-df65-40d1-bd22-883388f8ef8c\") : invalid character 'R' looking for beginning of value"
Apr 21 15:41:08 aks-agentpool-81843983-vmss000002 kernel: [112947.713732] IPv4: martian source 10.240.0.4 from 192.0.2.100, on dev cbr0
Ubuntu: 18.04.5 LTS
Kubernetes: v1.19.9
Kernel version: 5.4.0-1043-azure
Container Runtime: containerd://1.5.0-beta.git31a0f92df+azure
Is -d a valid mount option in blobfuse? @snachiap
Hi, with the mount options I added "-d" and was able to see better logs. It would be helpful to go through this thread to see the compatibility problem between blobfuse and libfuse in detail.
Solved for me! You need to upgrade the AKS cluster node pools, or create a new node pool, to get nodes with Ubuntu 18.04.5 instead of 18.04.1, which I believe was the original problem. For example, the problem is solved on:
Kernel Version: 5.4.0-1048-azure
OS Image: Ubuntu 18.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.14
Kubelet Version: v1.18.14
Kube-Proxy Version: v1.18.14
$ kubectl describe pod blobfuse-flexvol-installer-265db -n kube-system |grep Image
Image: mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.18
Note: you also need to upgrade/reinstall the flexvolume DaemonSet (blobfuse-flexvol-installer) so that you get the latest version of the driver image.
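A hedged sketch of that reinstall (the manifest location is a placeholder; use the one from the driver repo or your usual install method):
# Delete the old installer DaemonSet, re-apply the manifest from the driver repo,
# then confirm the nodes picked up the newer image (1.0.18 in this thread)
kubectl -n kube-system delete ds blobfuse-flexvol-installer
kubectl apply -f <blobfuse-flexvol-installer-manifest>
kubectl -n kube-system describe ds blobfuse-flexvol-installer | grep Image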
Still pending: checking this on upgraded AKS node pools with K8s v1.20.x...