Pod could not mount cephfs with external-storage
akumacxd opened this issue · 4 comments
Kubernetes version
```
# skuba cluster status
NAME       STATUS   ROLE     OS-IMAGE                              KERNEL-VERSION           KUBELET-VERSION   CONTAINER-RUNTIME   HAS-UPDATES   HAS-DISRUPTIVE-UPDATES   CAASP-RELEASE-VERSION
master01   Ready    master   SUSE Linux Enterprise Server 15 SP1   4.12.14-197.37-default   v1.16.2           cri-o://1.16.1      no            no                       4.1.2
worker01   Ready             SUSE Linux Enterprise Server 15 SP1   4.12.14-197.37-default   v1.16.2           cri-o://1.16.1
worker02   Ready             SUSE Linux Enterprise Server 15 SP1   4.12.14-197.37-default   v1.16.2           cri-o://1.16.1
worker03   Ready             SUSE Linux Enterprise Server 15 SP1   4.12.14-197.37-default   v1.16.2           cri-o://1.16.1
```
Error Info
```
# kubectl get pod
NAME       READY   STATUS              RESTARTS   AGE
test-pod   0/1     ContainerCreating   0          24m
```
```
# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf   1Gi        RWX            Delete           Bound    default/claim1   cephfs                  52m

# kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim1   Bound    pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf   1Gi        RWX            cephfs         53m
```
```
# kubectl describe pods test-pod
.......
  Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/13a887f9-2027-479b-9c11-9aeb05c2e23e/volumes/kubernetes.io~cephfs/pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf --scope -- mount -t ceph -o name=kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea,secret=AQA/a5RexFa2GhAAfZJP6UmyrruODkdWprNUBw== 192.168.2.40:6789,192.168.2.41:6789,192.168.2.42:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea /var/lib/kubelet/pods/13a887f9-2027-479b-9c11-9aeb05c2e23e/volumes/kubernetes.io~cephfs/pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf
  Output: Running scope as unit: run-r1dcccb3780994e0dba0c97d58582744c.scope
  couldn't finalize options: -34

  Warning  FailedMount  9m7s  kubelet, worker02  MountVolume.SetUp failed for volume "pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf" : CephFS: mount failed: mount failed: exit status 1
  Mounting command: systemd-run
  Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/13a887f9-2027-479b-9c11-9aeb05c2e23e/volumes/kubernetes.io~cephfs/pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf --scope -- mount -t ceph -o name=kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea,secret=AQA/a5RexFa2GhAAfZJP6UmyrruODkdWprNUBw== 192.168.2.40:6789,192.168.2.41:6789,192.168.2.42:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea /var/lib/kubelet/pods/13a887f9-2027-479b-9c11-9aeb05c2e23e/volumes/kubernetes.io~cephfs/pvc-1aeb24b3-9837-4f30-ba83-42a949c11dcf
  Output: Running scope as unit: run-r10a6fa08428a424c93ddbe2b965f7382.scope
  couldn't finalize options: -34
```
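Every attempt fails with `couldn't finalize options: -34` from the CephFS mount helper. Assuming that number is a negated Linux errno (the log itself does not say so), 34 is ERANGE, "Numerical result out of range", which can be confirmed with a one-liner on the node:

```
# Decode errno 34; assumes python3 is available, purely for reference
python3 -c 'import errno, os; print(errno.errorcode[34], "-", os.strerror(34))'
# ERANGE - Numerical result out of range
```

If that reading is right, the failure happens while the client assembles the mount options, before the Ceph cluster ever sees the request, though that is only an inference from the message.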
Ceph Cluster
```
# ceph -v
ceph version 14.2.5-389-gb0f23ac248 (b0f23ac24801724d9a7da89c2684f2b02bc9a49b) nautilus (stable)

# ceph auth list
......
client.kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea
        key: AQA/a5RexFa2GhAAfZJP6UmyrruODkdWprNUBw==
        caps: [mds] allow r,allow rw path=/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea
        caps: [mon] allow r
        caps: [osd] allow rw pool=cephfs_data namespace=fsvolumens_kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea
.....
```
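The provisioner-created user and its caps look plausible. To take Kubernetes out of the picture, one option is to pull that user's key straight from the cluster and keep it in a file for manual mount tests; a sketch, reusing the `client.kubernetes-dynamic-user-…` name from the `ceph auth list` output above:

```
# Show exactly what the provisioner granted to the dynamic user
ceph auth get client.kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea

# Save only the key into a root-readable file, for use with mount.ceph's secretfile= option
ceph auth print-key client.kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea \
  > /etc/ceph/dynamic-user.secret
chmod 600 /etc/ceph/dynamic-user.secret
```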
Directly mount CephFS
kubernetes-dynamic-user
```
# mount -t ceph 192.168.2.40,192.168.2.41,192.168.2.42:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea /mnt/cephfs_client/ -o name=kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea,secret=AQA/a5RexFa2GhAAfZJP6UmyrruODkdWprNUBw==,rasize=1638
couldn't finalize options: -34
```
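A variant worth trying here (just a sketch; it is an assumption that the long inline `secret=` value has anything to do with the `-34`) is to pass the key through `mount.ceph`'s `secretfile=` option instead, using the `/etc/ceph/dynamic-user.secret` file written above:

```
# Same mount as above, but reading the key from a file instead of the command line
mount -t ceph \
  192.168.2.40,192.168.2.41,192.168.2.42:6789:/pvc-volumes/kubernetes/kubernetes-dynamic-pvc-02544ccb-7d8c-11ea-97bb-26e525b38aea \
  /mnt/cephfs_client/ \
  -o name=kubernetes-dynamic-user-02544d0f-7d8c-11ea-97bb-26e525b38aea,secretfile=/etc/ceph/dynamic-user.secret
```

Even if this succeeds it only narrows the problem down, since the kubelet still builds the inline `secret=` form shown in the events above.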
Admin
```
$ mount -t ceph 192.168.2.40,192.168.2.41,192.168.2.42:6789:/ /mnt/cephfs_client/ -o name=admin,secret=AQA9w4VdAAAAABAAHZr5bVwkALYo6aLVryt7YA==,rasize=16384

$ df -Th | grep cephfs
192.168.2.40,192.168.2.41,192.168.2.42:6789:/   ceph   4.3G   0   4.3G   0%   /mnt/cephfs_client
```
```
# cat test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
    - "/bin/sh"
    args:
    - "-c"
    - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
    - name: pvc
      mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: claim1
```
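For completeness, a claim matching the `kubectl get pvc` output above would look roughly like the following; this is reconstructed from the reported size, access mode, and storage class, not the reporter's actual manifest:

```
# Hypothetical reconstruction of claim1 (values taken from the get pv/pvc output above)
cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF
```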
I found that the same mount works if I use a CentOS host:
```
# mount -t ceph -o name=kubernetes-dynamic-user-ec21dd39-7d95-11ea-9189-2ef6e2f9324c,secret=AQBx65Re7yQwBRAANqsYrg4/te6k2R5PVZryfg== 172.16.21.101:6789:/ /mnt/cephfstest/

[root@hwk8s-master1 ctwork]# uname -r
4.19.110-300.el7.x86_64

# df | grep ceph-test
172.16.21.101:6789:/   5330165760   0   5330165760   0%   /root/ctwork/ceph-test

# cat /etc/*release
CentOS Linux release 7.6.1810 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
```
SUSE OS
```
admin:~ # mount -t ceph -o name=kubernetes-dynamic-user-e7118130-7d88-11ea-be30-6e29327075c2,secret=AQAJZpReZvy2HBAAw4jwxv65PVm2JB3hCWimnQ== node001:6789:/ /mnt/
couldn't finalize options: -34

admin:~ # uname -r
4.12.14-197.37-default

admin:~ # uname -a
Linux admin 4.12.14-197.37-default #1 SMP Fri Mar 20 14:56:51 UTC 2020 (e98f980) x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/SUSE-brand
SLE
VERSION = 15
```
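Since the same manual mount succeeds on the CentOS 7 host (kernel 4.19) but fails on SLES 15 SP1 (kernel 4.12.14), it may be worth comparing the CephFS client pieces on both machines before pinning this on the provisioner; a sketch of what to collect (both hosts are rpm-based, so the same commands should work on each):

```
# Userspace mount helper: mount.ceph is shipped by ceph-common
rpm -q ceph-common
rpm -qf "$(command -v mount.ceph)"

# Kernel CephFS client
uname -r
modinfo ceph | head -n 5
```

If the helper or kernel client versions differ noticeably between the two hosts, that difference, rather than external-storage itself, is the more likely place to look, though that is only a guess from the data above.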
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Thanks for reporting the issue!
This repo is no longer being maintained and we are in the process of archiving this repo. Please see kubernetes/org#1563 for more details.
If your issue relates to nfs provisioners, please create a new issue in https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner or https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.
Going to close this issue in order to archive this repo. Apologies for the churn and thanks for your patience! 🙏