"Too many levels of symbolic links issue" when working with Jupyterhub
Closed this issue · 3 comments
I have successfully deployed cvmfs-csi and can access the CVMFS repos, but when I try to mount them into a JupyterHub instance, it always fails with a "Too many levels of symbolic links" error:
jovyan@jupyter-ee069xxx ~$ ls /my-cvmfs/
atlas.cern.ch cvmfs-config.cern.ch
jovyan@jupyter-ee069xxx ~$ ls /my-cvmfs/atlas.cern.ch
ls: cannot open directory '/my-cvmfs/atlas.cern.ch': Too many levels of symbolic links
I am wondering whether this is a CVMFS-related issue or a JupyterHub-related one.
Some users on the CVMFS side have reported the same error, suspected the automount/autofs mechanism was at fault, and suggested disabling autofs. However, I checked a working pod/container: autofs is running there as well, and accessing the CVMFS repos works fine.
Below is the relevant stanza of the JupyterHub config:
singleuser:
  storage:
    ...
    extraVolumes:
      - name: cvmfs-jhub-shared
        persistentVolumeClaim:
          claimName: cvmfs-jhub-shared
    extraVolumeMounts:
      - name: cvmfs-jhub-shared
        mountPath: /my-cvmfs
and here is the PVC, which works fine for a pod created manually in the same namespace:
cvmfs-jhub-shared Bound pvc-e3f1a126-ed9e-4d77-8d49-5c8c8ce9b93a 1 ROX cvmfs 3h56m
I may need to raise this issue with JupyterHub, but before doing that I wanted to see if anyone here has any insight.
Hi @ermingpei, this is indeed related to autofs. The cause is most likely that your volumeMounts are missing mountPropagation: HostToContainer, which automounts require. See https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md#cvmfs-automounts for reference.
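For reference, outside of JupyterHub the same requirement in a plain Pod spec would look roughly like this (a minimal sketch; the pod name and image are placeholders, and the claim name is taken from your PVC above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cvmfs-test            # placeholder name
spec:
  containers:
    - name: test
      image: busybox          # any image works for a quick "ls" test
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: my-cvmfs
          mountPath: /my-cvmfs
          # Required so that autofs mounts performed on the host side
          # propagate into the container:
          mountPropagation: HostToContainer
  volumes:
    - name: my-cvmfs
      persistentVolumeClaim:
        claimName: cvmfs-jhub-shared
```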
I'm not familiar with JupyterHub, but I'd try setting:
extraVolumeMounts:
  - name: cvmfs-jhub-shared
    mountPath: /my-cvmfs
    mountPropagation: HostToContainer
FYI, should this not work, the "old" way of having one PVC per CVMFS repository is still possible: https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md#example-mounting-single-cvmfs-repository-using-repository-parameter
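As a rough sketch of that fallback, a statically provisioned PersistentVolume pinned to one repository would look something like the following. The driver name, volumeHandle, and attribute names here are assumptions on my part; please verify them against the linked docs and your deployment:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cvmfs-atlas
spec:
  accessModes:
    - ReadOnlyMany
  capacity:
    storage: 1
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: cvmfs.csi.cern.ch   # assumed driver name; check your CSI deployment
    volumeHandle: cvmfs-atlas   # arbitrary unique handle
    volumeAttributes:
      repository: atlas.cern.ch # one PV/PVC per repository in this scheme
```

With this approach no automount is involved for that volume, so mountPropagation is not needed.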
Please let us know if this solves your issue.
Cool, thanks! I'm closing this issue then. If you have more questions, please feel free to reopen or open a new one.