kubernetes-csi/csi-driver-nfs

Allow more templating substitutions in subDir

Opened this issue · 3 comments

Is your feature request related to a problem?/Why is this needed
Currently subDir allows only a few templating substitutions: ${pvc.metadata.name}, ${pvc.metadata.namespace} and ${pv.metadata.name}.
Because pvc.metadata.name HAS to be unique within a namespace AND must satisfy DNS-name requirements, the directory names that get created are forced to be unnecessarily long and are very limited in the characters they can use.
For example, having:

StorageClass name: smb, share: /SHARE, subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}
StorageClass name: nfs, share: /SHARE, subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}
PVC name: deployment1-nfs-data, storageClassName: nfs, namespace: test
PVC name: deployment1-nfs-shared, storageClassName: nfs, namespace: test
PVC name: deployment1-smb-data, storageClassName: smb, namespace: test
Deployment name: deployment1, namespace: test, volumes:
 - claimName: deployment1-nfs-data, mountPath: /data1
 - claimName: deployment1-nfs-shared, mountPath: /shared
 - claimName: deployment1-smb-data, mountPath: /data2

On the nfs share, that would create, for example:
/SHARE/test/deployment1-nfs-data, /SHARE/test/deployment1-nfs-shared
whereas I would like to be able to create just:
/SHARE/test/data, /SHARE/test/shared
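
For reference, a minimal sketch of the nfs StorageClass and one of the PVCs from the example above (the server address, access mode and size are placeholders, not part of the original setup):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com        # placeholder address
  share: /SHARE
  # today only ${pvc.metadata.name}, ${pvc.metadata.namespace} and ${pv.metadata.name} are substituted
  subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: deployment1-nfs-data
  namespace: test
spec:
  accessModes: ["ReadWriteMany"]        # assumption
  resources:
    requests:
      storage: 1Gi                      # assumption
  storageClassName: nfs
```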

Describe the solution you'd like in detail
I suppose the easiest approach would be to allow annotations to be used in templating, e.g. ${pvc.metadata.annotations[annotation_name]} or similar (much like external-provisioner can reference secrets via the PVC, I guess?).
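
A hypothetical sketch of what that could look like (the annotation key nfs.csi.k8s.io/sub-dir and the ${pvc.metadata.annotations[...]} substitution are proposed syntax, not something csi-driver-nfs supports today):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder address
  share: /SHARE
  # proposed, not implemented: take the last path segment from a PVC annotation
  subDir: ${pvc.metadata.namespace}/${pvc.metadata.annotations[nfs.csi.k8s.io/sub-dir]}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: deployment1-nfs-data
  namespace: test
  annotations:
    nfs.csi.k8s.io/sub-dir: data   # would resolve subDir to test/data
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
```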

Describe alternatives you've considered
I cannot give the PVCs shorter names due to clashes between different deployments and different storage classes. Using inline/ephemeral volumes is also not possible, as the CSIDriver object specifies that it supports only persistent volumes.
I could share a PVC between deployments, but that depends on the accessMode as well, and it only shaves the deployment1- prefix off the name.
Another option is using a PVC name like nfs and mounting with subPath, but then deleting the PVC/PV removes too much.
One way I found is to use a separate StorageClass for each pod with a static subDir (see the sketch below), but I guess that is a bit of overkill/misuse?
Or am I trying to use this completely wrong, and is there a better way that I did not consider?
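
For completeness, a rough sketch of that per-pod StorageClass workaround with a static subDir (the server address is a placeholder); it works, but one StorageClass per mount point does not scale:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-deployment1-data
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder address
  share: /SHARE
  subDir: test/data                # static value, no templating needed
reclaimPolicy: Retain              # avoid Delete removing the shared subtree
```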

I have a corporate requirement to mount a share with a preexisting structure, roughly /SHARE/${namespace}/${deployment.name}, and I cannot find a way to satisfy that requirement in Kubernetes while retaining the ability to create that directory dynamically if it is missing.

I see that there is a LOT of discussion in this area...
kubernetes-csi/external-provisioner#86
kubernetes-csi/external-provisioner#808
kubernetes-csi/csi-driver-smb#428
kubernetes-csi/csi-driver-smb#795
kubernetes-csi/csi-driver-smb#783

And this discussion goes back to 2018...
People argue that adding annotation support to subDir templating poses a security risk, while the already-existing secrets lookup poses the same risk, since it is possible to guess a secret that holds better credentials (e.g. going from RO to RW). Misuse of a StorageClass can lead to a wipe of the whole share as well (via Recycle/Delete; this happened to me while testing), while a properly configured subDir cannot grant access to other shares (just prefix it with the namespace, or even a constant path per tenant)...
Meanwhile, a number of proprietary provisioners work around this limitation. It can be made secure or insecure; it all depends on the configuration the user/administrator provides.

Could you elaborate a bit more on "misuse of a StorageClass can lead to a wipe of the whole share as well (via Recycle/Delete)"?

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale