Overly permissive file modes for mounted secrets
stuszynski opened this issue · 2 comments
Hi. After digging into Workflow a little, I discovered that every application container running on `deis/slugrunner` has an object store credentials volume attached to it. I know that the slugrunner needs access to S3 storage to download the slug tarball, but shouldn't it be considered a security issue that every user in the container (including the application itself) has read access to those files?
We thought we could use the `defaultMode` option in Kubernetes to restrict the mounted volume's permissions to the `root` user, but it seems that both the init and execution processes of the slugrunner run as the `slug` user.
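For context, this is roughly the restriction we had in mind (a minimal pod-spec sketch; the secret name and mount path are our best guess at what Workflow uses, so treat the details as illustrative):

```yaml
# Sketch of a secret volume restricted to root; not Workflow's actual manifest.
spec:
  volumes:
    - name: objectstorage-keyfile
      secret:
        secretName: objectstorage-keyfile
        defaultMode: 0400          # readable only by the file owner (root)
  containers:
    - name: web
      image: quay.io/deis/slugrunner
      volumeMounts:
        - name: objectstorage-keyfile
          mountPath: /var/run/secrets/deis/objectstore/creds
          readOnly: true
```

Since the secret files are owned by `root`, a `defaultMode` of `0400` hides them from the `slug` user, which is exactly why it also breaks the current init flow.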
I'm certain that I made a similar comment long before we released v2, but I can't seem to find it now. I agree that we need to rethink this model.
Related (a discussion against storing slugs separately from the image): deis/controller#324
@bacongobbler Do you have any ideas on how to mitigate this issue? Maybe there is a way to unmount those credentials after the slugrunner is done with its init scripts. Alternatively, maybe we could move those credentials into environment variables and unset them after initialization?
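A rough sketch of that second idea, assuming the credentials were injected as environment variables (all names here are hypothetical, and `download_slug` stands in for the real fetch step):

```sh
#!/bin/sh
# Hypothetical init fragment: use the injected credentials once, then
# scrub them from the environment before handing control to the app.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / SLUG_URL are assumed to be
# set by the pod spec.

download_slug "$SLUG_URL" /app

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY

# The application started below never sees the credentials.
exec /app/boot "$@"
```

One caveat: environment variables remain visible to anyone who can inspect the container on the node (e.g. via `docker inspect`), so this narrows the exposure rather than eliminating it.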
Maybe I'm missing something, but I wonder if it would be better if the slugrunner started the `/runner/init` script as the `root` user and then used `su` to run the application as the `slug` user. In that case, we could set a `defaultMode` on those mounts that denies read access to non-root users. A sketch of that flow is below.
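Something like the following is what I have in mind (a sketch only; `download_slug` and `/app/boot` are stand-ins, and it assumes the credentials are mounted with `defaultMode: 0400`):

```sh
#!/bin/sh
# Hypothetical /runner/init: starts as root, drops to `slug` for the app.
set -e

if [ "$(id -u)" = "0" ]; then
    # Still root, so the 0400 credential files are readable: fetch the slug.
    download_slug /app

    # Re-exec this script as the unprivileged slug user; from that point
    # on the credential files can no longer be read.
    exec su -s /bin/sh slug -c "exec $0 $*"
fi

# Running as slug: start the application without access to the credentials.
exec /app/boot "$@"
```

In container entrypoints, `gosu` or `setpriv` are often preferred over `su` to avoid `su`'s PAM and signal-handling quirks, but the idea is the same.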