Timeout waiting for mount issue
jasperweyne opened this issue · 6 comments
Hi!
Currently, I'm trying to set up this S3 driver for my volumes. To do that, I first installed this driver through the helm chart, and then installed this FTP server chart with the following configuration:
```yaml
persistentVolume:
  enabled: true
  size: 1Gi
  storageClass: 'csi-s3'
  subPath: '.'
```
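For reference, the `csi-s3` StorageClass referenced above is the one created by the driver's helm chart; with default values it should look roughly like this (reconstructed from the chart defaults and the geesefs options visible in the logs below, so treat it as a sketch rather than my exact manifest):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  # default mounter; the options string is passed through to the
  # geesefs invocation visible in the node logs below
  mounter: geesefs
  options: "--memory-limit 1000 --dir-mode 0777 --file-mode 0666"
  # chart-default secret references
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
```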
This setup results in a newly created bucket, so the connection to the S3 provider seems to be fine. However, the volume cannot be mounted into the container, which prevents it from starting. The pod remains Pending, with the following two events looping:
```
Unable to attach or mount volumes: unmounted volumes=[storage-volume], unattached volumes=[istio-envoy istio-token istio-podinfo config-users workload-socket storage-volume istio-data workload-certs istiod-ca-cert kube-api-access-qlzp7 credential-socket]: timed out waiting for the condition
MountVolume.MountDevice failed for volume "pvc-ad425190-26eb-4aaa-95cc-efd013eee63a" : rpc error: code = Unknown desc = Timeout waiting for mount
```
Looking at the logs of the csi-s3 daemonset pod, the information below might be relevant. The NodeGetCapabilities calls and everything after them keep repeating over time.
```
I0207 16:56:18.574620 1 driver.go:73] Driver: ru.yandex.s3.csi
I0207 16:56:18.574709 1 driver.go:74] Version: v1.34.7
I0207 16:56:18.574728 1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0207 16:56:18.574739 1 driver.go:93] Enabling volume access mode: MULTI_NODE_MULTI_WRITER
I0207 16:56:18.574949 1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//csi/csi.sock", Net:"unix"}
I0207 16:56:19.395745 1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0207 16:56:20.411210 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetInfo
I0207 16:56:38.246767 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0207 16:57:11.996537 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0207 16:57:12.003235 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0207 16:57:12.004432 1 utils.go:97] GRPC call: /csi.v1.Node/NodeGetCapabilities
I0207 16:57:12.005666 1 utils.go:97] GRPC call: /csi.v1.Node/NodeStageVolume
I0207 16:57:12.010102 1 geesefs.go:150] Starting geesefs using systemd: /var/lib/kubelet/plugins/ru.yandex.s3.csi/geesefs -f -o allow_other --endpoint https://leafcloud.store --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 pvc-ad425190-26eb-4aaa-95cc-efd013eee63a: /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/e9b118043e660c80fccf7add5f37c4752d52438711e009b967b4e343c8728a6f/globalmount
E0207 16:57:23.271053 1 utils.go:101] GRPC error: Timeout waiting for mount
```
Our cloud provider provisions Kubernetes clusters through Gardener, with Ubuntu 20.04 as the worker node OS. Any idea what I might have done wrong?
A workaround seems to be to use the s3fs mounter (with empty mountOptions); however, since geesefs is the recommended mounter in the README, I'd nonetheless like to know how to resolve this issue.
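In case it helps others, the StorageClass for the s3fs workaround would look roughly like this (the name is my own and the secret references are the chart defaults, so treat it as a sketch rather than a verified manifest):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3-s3fs
provisioner: ru.yandex.s3.csi
parameters:
  # s3fs instead of the default geesefs mounter
  mounter: s3fs
  # empty mount options, as mentioned above
  options: ""
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
```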
Hi, this error message doesn't tell us much about the underlying problem. Check journalctl on the host - maybe it doesn't succeed in executing geesefs on the host in your case?
In my case, this error was caused by a lack of Internet access for the Kubernetes cluster
> Hi, this error message doesn't tell us much about the underlying problem. Check journalctl on the host - maybe it doesn't succeed in executing geesefs on the host in your case?
Hi, I have the same question. The journalctl log on the host is below:
```
Jun 06 10:14:33 k8ssvr2 geesefs[3826408]: 2024/06/06 10:14:33.363886 s3.ERROR Unable to access 'test': x509: cannot validate certificate for 10.111.122.115 because it doesn't contain any IP SANs
Jun 06 10:14:33 k8ssvr2 geesefs[3826408]: 2024/06/06 10:14:33.680463 s3.WARNING code=RequestError msg=send request failed, err=Head "https://10.111.122.115:443/test/pvc-5cce410f-2a1e-4388-bac1-e3a1cf6d0ed7/adxknc1oulgcqow7i9xg79sr3vb2euov": x509: cannot validate certificate for 10.111.122.115 because it doesn't contain any IP SANs
Jun 06 10:14:33 k8ssvr2 geesefs[3826408]: 2024/06/06 10:14:33.680496 s3.WARNING code=RequestError msg=send request failed, err=Head "https://10.111.122.115:443/test/pvc-5cce410f-2a1e-4388-bac1-e3a1cf6d0ed7/adxknc1oulgcqow7i9xg79sr3vb2euov": x509: cannot validate certificate for 10.111.122.115 because it doesn't contain any IP SANs
Jun 06 10:14:33 k8ssvr2 geesefs[3826408]: 2024/06/06 10:14:33.680519 main.FATAL Mounting file system: Unable to access 'test': RequestError: send request failed
Jun 06 10:14:33 k8ssvr2 geesefs[3826408]: caused by: Head "https://10.111.122.115:443/test/pvc-5cce410f-2a1e-4388-bac1-e3a1cf6d0ed7/adxknc1oulgcqow7i9xg79sr3vb2euov": x509: cannot validate certificate for 10.111.122.115 because it doesn't contain any IP SANs
Jun 06 10:14:33 k8ssvr2 systemd[1]: geesefs-test_2fpvc_2d5cce410f_2d2a1e_2d4388_2dbac1_2de3a1cf6d0ed7.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStart= process belonging to unit geesefs-test_2fpvc_2d5cce410f_2d2a1e_2d4388_2dbac1_2de3a1cf6d0ed7.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Jun 06 10:14:33 k8ssvr2 umount[3826425]: umount: /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/5d420f0f28e3e64e3ebbee229154dcfa46feac002e7d004a65b7b50b78fb5acf/globalmount: not mounted.
Jun 06 10:14:33 k8ssvr2 systemd[1]: geesefs-test_2fpvc_2d5cce410f_2d2a1e_2d4388_2dbac1_2de3a1cf6d0ed7.service: Control process exited, code=exited, status=32/n/a
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: http://www.ubuntu.com/support
░░
░░ An ExecStopPost= process belonging to unit geesefs-test_2fpvc_2d5cce410f_2d2a1e_2d4388_2dbac1_2de3a1cf6d0ed7.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 32.
Jun 06 10:14:33 k8ssvr2 systemd[1]: geesefs-test_2fpvc_2d5cce410f_2d2a1e_2d4388_2dbac1_2de3a1cf6d0ed7.service: Failed with result 'exit-code'.
```
It looks like a certificate issue similar to #25, which is really strange because I set insecure=true in the secret file:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
stringData:
  accessKeyID: "xxx"
  secretAccessKey: "xxxxxx"
  endpoint: https://10.111.122.115:443
  insecure: "true"
```
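Since the log shows TLS validation still happening despite `insecure: "true"` (the server certificate for the bare IP has no IP SANs, so Go clients like geesefs reject it), the proper fix would be to reissue the endpoint's certificate with 10.111.122.115 in its SANs. As an interim workaround - an assumption on my part, only valid if the S3 service also listens on plain HTTP - the secret could point at an HTTP endpoint so no certificate is validated at all:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
stringData:
  accessKeyID: "xxx"
  secretAccessKey: "xxxxxx"
  # plain-HTTP endpoint sidesteps certificate validation entirely;
  # assumes the S3 service is also reachable over HTTP on port 80
  endpoint: http://10.111.122.115:80
```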