NFS-Client creates PV directories with no permissions
Nithinbs18 opened this issue · 4 comments
Dear team,
Greetings!
I have created a Windows share, and I can mount it on any of my cluster's worker nodes and write data, so there is no connectivity issue. I want to use this share as NFS storage, and I have granted every possible read+write permission on it, including to Everyone. I tested this by creating a PV and a PVC; both creation and binding succeed. However, whenever another application uses this storage class, the dynamically provisioned PV, once mounted into the container inside the pod, has no permissions at all (d---------). I created the nfs-client using the stable chart with the configuration below.
---
replicaCount: 1
strategyType: Recreate
image:
  repository: quay.io/external_storage/nfs-client-provisioner
  tag: latest
  pullPolicy: IfNotPresent
nfs:
  server: **.**.**.**
  path: /persistent-volume
  mountOptions:
storageClass:
  create: true
  defaultClass: false
  name: nfs-client
  allowVolumeExpansion: true
  reclaimPolicy: Delete
  archiveOnDelete: true
rbac:
  create: true
podSecurityPolicy:
  enabled: false
serviceAccount:
  create: true
  name:
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
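For reference, a minimal claim against this class looks like the sketch below; the claim name and requested size are illustrative assumptions, not the exact manifest I used.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim               # hypothetical name
spec:
  storageClassName: nfs-client   # matches storageClass.name above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi               # assumed size, enough for a smoke test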
Please let me know if there is something that is missing or if there is a way by which the storage class can assign required permission automatically.
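For illustration, the pod-side place where such permissions are normally requested is a securityContext with fsGroup, sketched below; the group ID is an assumption, and I am not sure fsGroup even applies to NFS-backed volumes, so treat this as a sketch rather than a confirmed fix.

apiVersion: v1
kind: Pod
metadata:
  name: permission-test          # hypothetical name
spec:
  securityContext:
    fsGroup: 1000                # assumed GID; fsGroup may not be honored on NFS volumes
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "ls -ld /data && sleep 3600"]   # print the mount's permissions
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim    # the hypothetical claim from the sketch above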
Thank you very much in advance.
Regards,
Nithin
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.