static PersistentVolume with s3fs fails immediately on initial mount
Opened this issue · 1 comment
schlichtanders commented
The pod shows a FailedMount event in its describe output. More precisely:
MountVolume.MountDevice failed for volume "myvolume" : rpc error: code = Unknown desc = stat /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/6906bd35218cf1d989e23a82b60f291f35eb9b7412b0038d021fb75d3c10dc24/globalmount: software caused connection abort
Here is the YAML for the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: MyPV
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: claimname
    namespace: claimnamespace
  csi:
    controllerPublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    driver: ru.yandex.s3.csi
    nodePublishSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    nodeStageSecretRef:
      name: csi-s3-secret
      namespace: kube-system
    volumeAttributes:
      capacity: 10Gi
      mounter: s3fs
      options: ""
    volumeHandle: mybucket/myfolder
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-s3
  volumeMode: Filesystem
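For completeness, the claimRef above is bound to a PVC roughly like the following. This is only a sketch of my setup; the names claimname and claimnamespace are the same placeholders as in the PV, and the 10Gi request and csi-s3 storage class are assumed to mirror the PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claimname
  namespace: claimnamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-s3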
schlichtanders commented
It seems that s3fs is currently buggy, as multiple users have commented on #16.
(In case anyone asks: I want to try out s3fs because I am also running into chmod problems with geesefs.)
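If the geesefs chmod problems are really about ownership and permissions, my understanding is that mounter flags can be passed through the options attribute of the PV, something like the excerpt below. This is just a sketch I have not verified against the csi-s3 docs, and the --uid/--gid/--file-mode/--dir-mode flags and their values are assumptions on my side:

  csi:
    volumeAttributes:
      capacity: 10Gi
      mounter: geesefs
      # assumed geesefs flags to force ownership/permissions instead of chmod'ing after mount
      options: "--uid 1000 --gid 1000 --file-mode 0664 --dir-mode 0775"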