kubernetes-retired/external-storage

[cephfs-provisioner] - unable to provision claim

SockenSalat opened this issue · 1 comment

When I try to provision the claim from the examples, I get a 'Warning' event with reason 'ProvisioningFailed': failed to provision volume with StorageClass "cephfs": exit status 1. However, directories matching the generated share name are created in the Ceph cluster.
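For reference, this is roughly how I reproduced it (a sketch, assuming the class.yaml and claim.yaml manifests under ceph/cephfs/example in this repository; claim1 matches the claim name in the log below):

# apply the example StorageClass and claim from this repo
kubectl create -f ceph/cephfs/example/class.yaml
kubectl create -f ceph/cephfs/example/claim.yaml
# the claim stays Pending; the events show the ProvisioningFailed warning
kubectl describe pvc claim1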

env

CentOS 7
Kubernetes 1.17.2
ceph-common 2:12.2.13-0.el7

details

I0210 09:03:28.814601       1 controller.go:987] provision "default/claim1" class "cephfs": started
I0210 09:03:28.819000       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"claim1", UID:"633462d2-a506-41ee-b865-f72fbb37055b", APIVersion:"v1", ResourceVersion:"739088", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/claim1"
E0210 09:03:29.028006       1 cephfs-provisioner.go:158] failed to provision share "kubernetes-dynamic-pvc-342d001d-4be4-11ea-b12e-d219074b8fbf" for "kubernetes-dynamic-user-342d0082-4be4-11ea-b12e-d219074b8fbf", err: exit status 1, output: Traceback (most recent call last):
  File "/usr/local/bin/cephfs_provisioner", line 364, in <module>
    main()
  File "/usr/local/bin/cephfs_provisioner", line 358, in main
    print cephfs.create_share(share, user, size=size)
  File "/usr/local/bin/cephfs_provisioner", line 228, in create_share
    volume = self.volume_client.create_volume(volume_path, size=size, namespace_isolated=not self.ceph_namespace_isolation_disabled)
  File "/lib/python2.7/site-packages/ceph_volume_client.py", line 641, in create_volume
    self.fs.setxattr(path, 'ceph.dir.layout.pool_namespace', namespace, 0)
  File "cephfs.pyx", line 988, in cephfs.LibCephFS.setxattr (/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/13.2.1/rpm/el7/BUILD/ceph-13.2.1/build/src/pybind/cephfs/pyrex/cephfs.c:10498)
cephfs.Error: (13, 'error in setxattr: error code 13')
W0210 09:03:29.028133       1 controller.go:746] Retrying syncing claim "default/claim1" because failures 2 < threshold 15
E0210 09:03:29.028200       1 controller.go:761] error syncing claim "default/claim1": failed to provision volume with StorageClass "cephfs": exit status 1
I0210 09:03:29.028244       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"claim1", UID:"633462d2-a506-41ee-b865-f72fbb37055b", APIVersion:"v1", ResourceVersion:"739088", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "cephfs": exit status 1

A directory kubernetes/kubernetes-dynamic-pvc-342d001d-4be4-11ea-b12e-d219074b8fbf has been created nonetheless.
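For anyone hitting the same thing: error code 13 in the traceback is EACCES, so checking the capabilities of the Ceph client the provisioner authenticates with is a good first step (a sketch; client.autoclaim is the user from my setup, substitute your own client id):

# show the caps of the provisioner's Ceph user
ceph auth get client.autoclaim
# in my case the mds cap was missing the 'p' flag, so the setxattr on
# ceph.dir.layout.pool_namespace was rejected with EACCES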

The problem was insufficient permissions on the Ceph user the provisioner authenticates with.

ceph auth get-or-create client.autoclaim mon 'allow rwx' mds 'allow rwp, allow rw path=/autoclaim/log' osd 'allow rwx pool=cephfs_data' did the trick, and everything works fine now.
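If the client key already exists, the caps can also be updated in place instead of creating a new user (a sketch using the same capability string as above):

# update the caps of the existing client; the 'p' flag in the mds cap is what
# allows setting ceph.dir.layout.* xattrs
ceph auth caps client.autoclaim mon 'allow rwx' mds 'allow rwp, allow rw path=/autoclaim/log' osd 'allow rwx pool=cephfs_data'
# the provisioner retries failed claims on its own (see the
# "Retrying syncing claim" line in the log above), so the pending claim
# should get bound shortly afterwards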