Help getting CEPHFS working
davesargrad opened this issue · 3 comments
I am trying to get CephFS working. The procedure I am following is the one found here; it is the section that follows the one on RBD.
I've struggled with the process, and I have one outstanding issue. I am documenting the process in this issue, hopefully for the benefit of others, but also because I'd like help relative to the final issue.
I already have a Ceph cluster and a separate K8s cluster up and running.
The steps are as follows:
- Create the namespace: kubectl create ns cephfs
- Create the secret: kubectl create secret generic ceph-secret-admin --from-literal=key="AQDJtspdXMyJLRAAZrRBzSyGR2rG5UqLHuDnAw==" -n cephfs
- Create the provisioner: kubectl create -n cephfs -f Ceph-FS-Provisioner.yaml
- Create the storage class: kubectl create -f Ceph-FS-StorageClass.yaml
- Create a PVC: kubectl create -f Ceph-FS-PVC.yaml
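After the last step, the PVC's status can be watched to see whether it binds. A sketch; `claim1` is a placeholder for whatever name Ceph-FS-PVC.yaml actually defines:

```shell
# "claim1" is a placeholder; substitute the PVC name from Ceph-FS-PVC.yaml.
kubectl get pvc claim1
# STATUS should move from Pending to Bound once the provisioner
# has created the backing CephFS volume.
```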
I've updated Ceph-FS-Provisioner.yaml to be consistent with K8s 1.16 (e.g. Deployment is no longer in extensions/v1beta1, having moved to apps/v1, and now requires a selector field).
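For anyone making the same update, the changed fields look roughly like this. A sketch, assuming the Deployment is named `cephfs-provisioner` (the actual name comes from Ceph-FS-Provisioner.yaml); it writes the updated header to a temp file so it can be compared against the old one:

```shell
# Sketch of the apps/v1 Deployment header required by K8s 1.16.
# "cephfs-provisioner" and the "app" label are illustrative names.
cat <<'EOF' > /tmp/provisioner-header.yaml
apiVersion: apps/v1            # was extensions/v1beta1 before 1.16
kind: Deployment
metadata:
  name: cephfs-provisioner
spec:
  selector:                    # selector is required in apps/v1
    matchLabels:
      app: cephfs-provisioner
  template:
    metadata:
      labels:                  # must match the selector above
        app: cephfs-provisioner
EOF
```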
Further details as follows. The key I am using in step 2:
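For reference, an admin key like the one used in step 2 is normally read straight from the Ceph cluster rather than typed in; a sketch, run on a node that has admin credentials:

```shell
# Print the key for the client.admin user; the output is what gets
# passed to --from-literal=key=... when creating the secret.
ceph auth get-key client.admin
```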
The creation of the provisioner (step 3):
The creation of the storage class (step 4):
Note that this storage class is defined as follows:
Specifically, it uses a claimRoot of /pvc-volumes.
When I create the PVC, it never binds.
I don't quite understand the claimRoot /pvc-volumes. I have not exported this from Ceph; I am guessing that I need to, based on the comment I see here.
Do I need to export "/pvc-volumes" and if so, does someone know the command for this?
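In the meantime, the usual way to see why a PVC stays Pending is to check its events and the provisioner's logs. A sketch, with placeholder resource names:

```shell
# "claim1" and "cephfs-provisioner" are placeholders; substitute the
# names from Ceph-FS-PVC.yaml and Ceph-FS-Provisioner.yaml.
kubectl describe pvc claim1                       # Events show why provisioning failed
kubectl get pods -n cephfs                        # confirm the provisioner pod is Running
kubectl logs -n cephfs deploy/cephfs-provisioner  # errors such as auth or path problems
```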
Thanks.
Hi, look at this:
#941 (comment)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@kkostin Ty. I eventually got Ceph RBD working. If I ever need to revisit CephFS, I'm sure I'll reference this issue.
For now I'll close it.