The directory /private is not writable
joshuacox opened this issue · 8 comments
I had a similar issue with the Redis container on my setup; I needed to change its stanza to:

```yaml
redis:
  enabled: true
  persistence:
    enabled: true
    storageClass: openebs-lvmpv
    size: 8Gi
  volumePermissions:
    enabled: true
```
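For context, `volumePermissions` in the Bitnami Redis chart works by injecting an extra init container that chowns the data volume before Redis starts. Roughly like this (a paraphrase from memory, not the chart's exact template — the image, uid 1001, and mount names are assumptions):

```yaml
initContainers:
  - name: volume-permissions
    image: bitnami/minideb         # assumed image
    securityContext:
      runAsUser: 0                 # root, so chown is permitted
    command: ['chown', '-R', '1001:1001', '/data']
    volumeMounts:
      - name: redis-data           # assumed volume name
        mountPath: /data
```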
full values.yaml
Is there a similar `volumePermissions` stanza I can add to the drupal container? I tried adding that exact stanza to the drupal block, with no luck.
I am using the lvm-localpv driver for OpenEBS, but everything looks okay there:
```shell
➜ drupal git:(master) ✗ kubectl get sc openebs-lvmpv
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-lvmpv (default) local.csi.openebs.io Delete Immediate false 78m
➜ drupal git:(master) ✗ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
drupal-drupal Bound pvc-d1d2b924-5138-4225-8fc6-1ca42d2ef3f2 8Gi RWO openebs-lvmpv 12m
drupal-mysql Bound pvc-ec368551-cc28-48eb-b22e-edf41c971fcc 8Gi RWO openebs-lvmpv 12m
drupal-nginx Bound pvc-c90ca6d5-427d-487a-9c6b-fa88fdd05de6 8Gi RWO openebs-lvmpv 12m
redis-data-drupal-redis-master-0 Bound pvc-73180e38-cb7b-4a60-bbc6-1e32e78fbae3 8Gi RWO openebs-lvmpv 12m
redis-data-drupal-redis-slave-0 Bound pvc-109dea85-656d-4d41-baff-0e94d45c55d9 8Gi RWO openebs-lvmpv 12m
redis-data-drupal-redis-slave-1 Bound pvc-6dd2c58b-8373-4bab-9aee-e5033c58b745 8Gi RWO openebs-lvmpv 11m
➜ drupal git:(master) ✗ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-109dea85-656d-4d41-baff-0e94d45c55d9 8Gi RWO Delete Bound default/redis-data-drupal-redis-slave-0 openebs-lvmpv 12m
pvc-6dd2c58b-8373-4bab-9aee-e5033c58b745 8Gi RWO Delete Bound default/redis-data-drupal-redis-slave-1 openebs-lvmpv 11m
pvc-73180e38-cb7b-4a60-bbc6-1e32e78fbae3 8Gi RWO Delete Bound default/redis-data-drupal-redis-master-0 openebs-lvmpv 12m
pvc-c90ca6d5-427d-487a-9c6b-fa88fdd05de6 8Gi RWO Delete Bound default/drupal-nginx openebs-lvmpv 12m
pvc-d1d2b924-5138-4225-8fc6-1ca42d2ef3f2 8Gi RWO Delete Bound default/drupal-drupal openebs-lvmpv 12m
pvc-ec368551-cc28-48eb-b22e-edf41c971fcc 8Gi RWO Delete Bound default/drupal-mysql openebs-lvmpv 12m
➜ drupal git:(master) ✗ kubectl get po
NAME READY STATUS RESTARTS AGE
drupal-765f49855f-lf5vc 3/3 Running 0 17m
drupal-mysql-7b7769b55d-n24sd 2/2 Running 0 17m
drupal-nginx-6f57fc6dd6-zw794 2/2 Running 0 17m
drupal-redis-master-0 2/2 Running 0 17m
drupal-redis-slave-0 2/2 Running 0 17m
drupal-redis-slave-1 2/2 Running 0 16m
drupal-varnish-787c7f8cfc-6bs77 2/2 Running 0 17m
```
It seems to me that whatever the Redis pod is doing with:

```yaml
volumePermissions:
  enabled: true
```

the drupal pod needs to do the same thing. I assume a chown or chmod, or perhaps a change of user. This might be fixable with an initContainer, but I'm having trouble getting the volumes mounted in the initContainer.
@joshuacox Can you try something like this in your values:

```yaml
drupal:
  initContainers:
    - name: set-volume-permissions
      image: 'alpine:3.10'
      command:
        - chown
        - '-R'
        - 'www-data:www-data'
        - /files/public
        - /files/private
      volumeMounts:
        - name: files-public
          mountPath: /files/public
        - name: files-private
          mountPath: /files/private
```
```shell
k logs drupal-75655d98f4-hmftr set-volume-permissions
chown: unknown user/group www-data:www-data
```
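That failure makes sense: alpine's base image ships a minimal `/etc/passwd` with no `www-data` entry, so chown cannot resolve the name. A quick local simulation (the passwd contents below are a trimmed stand-in for alpine's defaults, not copied from the image):

```shell
# Stand-in for alpine's minimal /etc/passwd: no www-data entry,
# so `chown www-data:www-data ...` has no uid/gid to resolve to.
cat > /tmp/alpine-passwd <<'EOF'
root:x:0:0:root:/root:/bin/ash
nobody:x:65534:65534:nobody:/:/sbin/nologin
EOF
if grep -q '^www-data:' /tmp/alpine-passwd; then echo present; else echo absent; fi
# → absent
```

Chowning by numeric uid:gid sidesteps the name lookup entirely, which is why the next attempt gets past this particular error.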
Hmm, how about numerically (www-data is uid/gid 84)?
```yaml
drupal:
  initContainers:
    - name: set-volume-permissions
      image: 'alpine:3.10'
      command:
        - chown
        - '-R'
        - '84:84'
        - /files/public
        - /files/private
      volumeMounts:
        - name: files-public
          mountPath: /files/public
        - name: files-private
          mountPath: /files/private
```
```shell
k logs drupal-84f7f5f8bb-pkh97 set-volume-permissions
chown: /files/public: Operation not permitted
chown: /files/public: Operation not permitted
chown: /files/private: Operation not permitted
chown: /files/private: Operation not permitted
```
I tried adding another init container that simply ran `whoami`:

```shell
k logs drupal-6cb7d78fd7-jkpq4 whoami
whoami: unknown uid 82
```
which I think comes from this stanza:

```yaml
securityContext:
  fsGroup: 82
  runAsUser: 82
  runAsGroup: 82
```
I cannot place that stanza on the initContainer:

```
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.initContainers[0].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext
```
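That error is expected: `fsGroup` is a field of the pod-level PodSecurityContext, while a container (or initContainer) `securityContext` only accepts per-container fields such as `runAsUser`. A sketch of where each field is allowed, reusing the values above:

```yaml
spec:
  securityContext:            # pod-level PodSecurityContext
    fsGroup: 82               # valid only here
    runAsUser: 82
    runAsGroup: 82
  initContainers:
    - name: set-volume-permissions
      securityContext:        # container-level SecurityContext
        runAsUser: 0          # valid here; fsGroup is not
```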
Oh! Right! I forgot that it won't run as root by default.
```yaml
drupal:
  initContainers:
    - name: set-volume-permissions
      image: 'alpine:3.10'
      command:
        - chown
        - '-R'
        - 'www-data:www-data'
        - /files/public
        - /files/private
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: files-public
          mountPath: /files/public
        - name: files-private
          mountPath: /files/private
```
That should run that specific init container as root.
I have one change to make to your stanza, `%s/www-data/84/g`, or:
```yaml
initContainers:
  - name: set-volume-permissions
    image: 'alpine:3.10'
    command:
      - chown
      - '-R'
      - '84:84'
      - /files/public
      - /files/private
    volumeMounts:
      - name: files-public
        mountPath: /files/public
      - name: files-private
        mountPath: /files/private
    securityContext:
      runAsUser: 0
```
And there is a PR as well; I would happily fold any suggested changes into that PR.
Oh thanks! Yeah, that makes sense; I guess that container doesn't know which uid/gid the www-data user/group maps to, since it's just a plain alpine image. Thanks for the PR, I'll take a look when I have a bit more time to review!
Hold on, rethinking this. I think my original expectation is probably the desired result, i.e. add:

```yaml
volumePermissions:
  enabled: true
```

to the drupal block and have that init container added automagically.
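If the chart grows that flag, the deployment template would presumably gate the init container on it, something like this sketch (the flag path and template layout are assumptions about the chart, not its actual source):

```yaml
{{- if .Values.drupal.volumePermissions.enabled }}
initContainers:
  - name: set-volume-permissions
    image: 'alpine:3.10'
    securityContext:
      runAsUser: 0
    command: ['chown', '-R', '84:84', '/files/public', '/files/private']
    volumeMounts:
      - name: files-public
        mountPath: /files/public
      - name: files-private
        mountPath: /files/private
{{- end }}
```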
I just tested on my test lab, and all looks to be good.