cloudfoundry/garden

readonly mounts of fuse mounts cause container create to fail

Closed this issue · 6 comments

Todos

Please try the following before submitting the issue:

  • [n/a] Upgrade Concourse if you are observing a failure in a test
  • [x] Use latest BOSH stemcell if the problem occurred in a BOSH VM

Description

If we pass a fuse-mounted directory to garden and ask it to make a read-only mount, garden fails to create the container, and the app crashes. This has been an issue for quite some time, but in the past we worked around it by performing the fuse mount itself read-only and then asking garden for a regular (read-write) mount. The downside of that approach is that the application is told it has a rw mount when it really has an ro mount. We now have a customer for whom that workaround is unacceptable, so we would like to see this fixed, or passed along to whatever library is responsible for the root cause.

Incidentally, when I test manually, I can create a read-only bind mount of a fuse mount with no problem. So maybe this is specific to read-only bind mounting into a container namespace?
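For what it's worth, one plausible explanation (an assumption on my part, not confirmed anywhere in this thread) is the kernel's handling of locked mount flags in user namespaces: a read-only remount inside an unprivileged user namespace must carry forward the original mount's nosuid/nodev/noexec/atime flags, or the kernel rejects it with EPERM, which would surface exactly as the "operation not permitted" in the log below. A minimal sketch of deriving the flags to preserve from the options column of /proc/self/mountinfo:

```go
package main

import (
	"fmt"
	"strings"
)

// Mount flag values mirrored from <linux/fs.h> so the sketch stays
// portable; on Linux the syscall package exports the same constants.
const (
	msRdonly     = 0x1      // MS_RDONLY
	msNosuid     = 0x2      // MS_NOSUID
	msNodev      = 0x4      // MS_NODEV
	msNoexec     = 0x8      // MS_NOEXEC
	msNoatime    = 0x400    // MS_NOATIME
	msNodiratime = 0x800    // MS_NODIRATIME
	msRelatime   = 0x200000 // MS_RELATIME
)

// preservedFlags converts the per-mount options column of
// /proc/self/mountinfo (e.g. "rw,nosuid,nodev,relatime") into the
// mount(2) flags that a remount in a user namespace must carry
// forward. Dropping any of these "locked" flags makes the kernel
// reject the remount with EPERM.
func preservedFlags(opts string) uint {
	var flags uint
	for _, o := range strings.Split(opts, ",") {
		switch o {
		case "ro":
			flags |= msRdonly
		case "nosuid":
			flags |= msNosuid
		case "nodev":
			flags |= msNodev
		case "noexec":
			flags |= msNoexec
		case "noatime":
			flags |= msNoatime
		case "nodiratime":
			flags |= msNodiratime
		case "relatime":
			flags |= msRelatime
		}
	}
	return flags
}

func main() {
	// Options as they might appear for a fuse mount created by mapfs.
	flags := preservedFlags("rw,nosuid,nodev,relatime")
	// A read-only remount that keeps the locked flags intact would be:
	//   mount("", target, "", MS_REMOUNT|MS_BIND|MS_RDONLY|flags, "")
	fmt.Printf("flags to OR into the remount: %#x\n", flags)
	// prints: flags to OR into the remount: 0x200006
}
```

If that is the cause, the fix would presumably amount to reading the source mount's existing flags before issuing the MS_REMOUNT|MS_RDONLY, and OR-ing them in.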

linked persi bug: #155418775
zendesk customer issue: https://discuss.zendesk.com/agent/tickets/76823

Logging and/or test output

This appears to be the crux of it:

{"timestamp":"1520633654.921634912","source":"guardian","message":"guardian.create.containerizer-create.runtime-create-failed","log_level":2,"data":{"error":"runc run: exit status 1: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/var/vcap/data/volumes/nfs/e2c87a5e-14f9-48d7-a244-1a7f3a596238-ea66b743aad04ca340973ba00b6cfb40\\\\\\\" to rootfs \\\\\\\"/var/vcap/data/grootfs/store/unprivileged/images/2d563f48-0faf-48e2-7d63-d1b8/rootfs\\\\\\\" at \\\\\\\"/var/vcap/data/grootfs/store/unprivileged/images/2d563f48-0faf-48e2-7d63-d1b8/rootfs/var/vcap/data/e2c87a5e-14f9-48d7-a244-1a7f3a596238\\\\\\\" caused \\\\\\\"operation not permitted\\\\\\\"\\\"\"\n","handle":"2d563f48-0faf-48e2-7d63-d1b8","session":"901.3"}}

Steps to reproduce

The steps I ran on the command line that did not reproduce the issue (i.e., the manual case that works fine) were something like this:

go get github.com/cloudfoundry/mapfs
mkdir orig
chmod 777 orig/
mkdir mapmount
chmod 777 mapmount
mkdir bindmount
chmod 777 bindmount/
mapfs -uid 1000 -gid 1000 mapmount/ orig/ &
touch mapmount/somethingelse
mount --bind mapmount/ bindmount/
touch bindmount/somethingentirelyelse # this works
mount -o remount,ro bindmount/
touch bindmount/somethingaltogetherentirelyelse # this correctly fails because the mount is read-only

We generally reproduce this in the context of diego & nfs-volume-release. If you want to go that route, you can grab this version of nfs-volume-release to get a build with the workaround disabled.

It is probably easier to just build or install a fuse driver of your choosing (e.g. mapfs), use it to create a mountpoint, and then feed that mountpoint into garden as an ro mount.
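For the "feed that mountpoint into garden" step, the request roughly corresponds to a bind-mount spec like the following (a sketch against the garden client API in code.cloudfoundry.org/garden; the address, handle, and paths are made up, error handling is elided, and it needs a running garden server, so treat it as illustrative only):

```go
package main

import (
	"code.cloudfoundry.org/garden"
	"code.cloudfoundry.org/garden/client"
	"code.cloudfoundry.org/garden/client/connection"
)

func main() {
	// 7777 is garden's conventional listen port; adjust as needed.
	c := client.New(connection.New("tcp", "127.0.0.1:7777"))

	// Hand garden the fuse (mapfs) mountpoint as a read-only bind
	// mount -- the case that fails with "operation not permitted".
	_, err := c.Create(garden.ContainerSpec{
		Handle: "fuse-ro-repro", // hypothetical handle
		BindMounts: []garden.BindMount{{
			SrcPath: "/var/vcap/data/volumes/nfs/some-volume", // the fuse mountpoint (hypothetical path)
			DstPath: "/var/vcap/data/some-volume",             // path inside the container
			Mode:    garden.BindMountModeRO,
			Origin:  garden.BindMountOriginHost,
		}},
	})
	if err != nil {
		panic(err) // expect the runc mount error from the log above
	}
}
```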

  • Garden Linux or Guardian release version: garden-runc/1.11.1
  • Linux kernel version (and other stuff): Linux version 4.4.0-116-generic (buildd@lcy01-amd64-023) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.4) ) #140~14.04.1-Ubuntu SMP Fri Feb 16 09:25:20 UTC 2018
  • Concourse version: n/a
  • Go version: 1.10

We have created an issue in Pivotal Tracker to manage this:

https://www.pivotaltracker.com/story/show/155869620

The labels on this github issue will be updated when the story is started.

oh hey--I just noticed that I am logged in as the "diego team"... that's foolish of me. I will log out forthwith.

here's a link to the slack conversation we had in the garden channel about this issue way back in march of 2017:
https://cloudfoundry.slack.com/archives/C033RE5D6/p1488905748014313

@julian-hj @julz can this be closed now?

@goonzoid @julz yes, I think so. We're sort of sitting on our hands until y'all cut a new release of garden-runc so that we don't have to update our pipeline with dev builds, but I expect that it should be fine to close it and we can open a new one if necessary.

I'd close it meself, except that I erroneously filed it while logged into github as "cf-diego" and I think it would be better to never log in as that account ever again 😬

closing since I have permissions.