cloudfoundry-incubator/quarks-operator

Be able to share mounts across some containers on a volume

Opened this issue · 1 comment

Is your feature request related to a problem? Please describe.
Context: attempting to implement NFS-persi for kubecf.

For kubecf, we would like to be able to implement Diego volume drivers (e.g. nfs-volume); this involves one container mounting things under /var/vcap/data/volumes/nfs/… (on the ephemeral volume) and expecting those mounts to show up in a different container.

Describe the solution you'd like

  • I'd like some way to express that a job wants to make mountpoints (which, I believe, ~requires it to be privileged?), possibly in a specific volume.
  • I'd like some way to have those mounts reflected in other jobs that mount the same volume, possibly opting in to this behaviour.
  • I'd like the ephemeral volume to be the target of this.

Describe alternatives you've considered

  • Some way to execute multiple jobs in the same mount namespace.
    • This wouldn't actually work, because the jobs are from different BOSH releases, and therefore need to use different base images.
  • Be able to opt in to process namespace sharing and do something magical with /proc/$pid/root
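
At the pod level, that second alternative would boil down to something like the sketch below (a hypothetical shape using the Kubernetes core/v1 Go types; the function name, container names, and images are placeholders, and nothing here is an existing quarks-operator API):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// Hypothetical sketch of the "shared process namespace" alternative: with a
// shared PID namespace, every container in the pod can see the other
// containers' processes, so one job could in principle reach another job's
// filesystem view via /proc/$pid/root. The names and images are placeholders,
// not anything quarks-operator generates today.
func sharedPIDNamespacePod() corev1.PodSpec {
	share := true
	return corev1.PodSpec{
		ShareProcessNamespace: &share,
		Containers: []corev1.Container{
			{Name: "nfs-mounter", Image: "placeholder/nfs-volume-release"},
			{Name: "consumer", Image: "placeholder/other-release"},
		},
	}
}
```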

Additional context
cloudfoundry-incubator/kubecf#382
I have a local PoC hack (that isn't worth pushing, I don't think) using Bidirectional mount propagation in privileged containers, and HostToContainer everywhere else, to achieve the desired-ish behaviour (at least, it does the minimum I need). But that looks like a terrible idea (including the scary warning†), so I'm hoping there's a… less terrible solution that I can't think of.

† The warning on the mount propagation docs:

Caution: Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged Containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by Containers in Pods must be destroyed (unmounted) by the Containers on termination.
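
For reference, the PoC boils down to roughly the pod shape below (a minimal sketch using the Kubernetes core/v1 Go types rather than the actual hack; the container names, images, mount path, and emptyDir volume name are placeholder assumptions):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// Rough shape of the PoC described above: a privileged "mounter" container
// propagates the mounts it creates under the shared ephemeral volume back to
// the host (Bidirectional), and the other containers receive them via
// HostToContainer. All names and images are placeholders.
func mountPropagationPod() corev1.PodSpec {
	privileged := true
	bidi := corev1.MountPropagationBidirectional
	h2c := corev1.MountPropagationHostToContainer

	return corev1.PodSpec{
		Volumes: []corev1.Volume{
			{
				// The shared ephemeral volume.
				Name: "ephemeral",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			},
		},
		Containers: []corev1.Container{
			{
				// The job that actually performs the NFS mounts.
				Name:  "nfs-mounter",
				Image: "placeholder/nfs-volume-release",
				SecurityContext: &corev1.SecurityContext{
					// Bidirectional propagation is only allowed in
					// privileged containers.
					Privileged: &privileged,
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:             "ephemeral",
					MountPath:        "/var/vcap/data",
					MountPropagation: &bidi,
				}},
			},
			{
				// Any other job that only needs to see those mounts.
				Name:  "consumer",
				Image: "placeholder/other-release",
				VolumeMounts: []corev1.VolumeMount{{
					Name:             "ephemeral",
					MountPath:        "/var/vcap/data",
					MountPropagation: &h2c,
				}},
			},
		},
	}
}
```

Per the quoted warning, anything the privileged container mounts this way would also have to be unmounted by it on termination.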

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/173068487

The labels on this github issue will be updated when the story is started.