Token file for service account is missing
I am not able to run the default Jenkins template in OpenShift because the service account's token is missing.
Version
$ oc version
oc v1.3.0-alpha.2+d3203f9
kubernetes v1.3.0+57fb9ac
features: Basic-Auth
Server https://10.34.129.138:8443
openshift v1.3.0-alpha.2+88b8a33
kubernetes v1.3.0+57fb9ac
Steps To Reproduce
- $ oc cluster up --version=latest
- $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-deployer latest 70629beb9fb4 9 hours ago 483.4 MB
openshift/origin latest 1f6cfe1949af 9 hours ago 483.4 MB
openshift/origin-pod latest bd2edbd2efa3 9 hours ago 1.591 MB
- $ oc process openshift//jenkins | oc create -f -
- $ oc status # the jenkins deployment fails after a few (~3) seconds
- $ docker logs <deployer-container-id> # on the openshift/origin-deployer container for jenkins
error: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
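To find the exited deployer container for the docker logs step, list all containers and filter on the deployer image (the grep is just illustrative):
$ docker ps -a | grep origin-deployer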
Current Result
Deployment fails. I have to go directly to the Docker daemon to find the issue, which is not reported in OpenShift.
error: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
Expected Result
No error.
@liggitt I am using Arch Linux and I don't have SELinux there. It worked for me before, but it might be connected to some update in between...
But if I use --version=v1.3.0-alpha.2, it works for me. That's why I think this might be a bug in OpenShift itself.
$ docker info
Containers: 7
Running: 1
Paused: 0
Stopped: 6
Images: 3
Server Version: 1.11.2
Storage Driver: overlay
Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.6.4-1-ARCH
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 19.5 GiB
Name: lenovo-t450s
ID: TAUM:E4YJ:QE7O:F5Y5:ZHQX:4KG3:KWZP:G2Y5:DHV4:7CVQ:WT6I:7CKG
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
What is the output of this command on the host: findmnt -o +PROPAGATION?
$ findmnt -o +PROPAGATION
TARGET SOURCE FSTYPE OPTIONS PROPAGATION
/ /dev/mapper/arch-root ext4 rw,relatime,data=ordered shared
├─/proc proc proc rw,nosuid,nodev,noexec,relatime shared
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=36,pgrp=1,timeout=0,minproto=5,maxproto=5,direct shared
│ └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_misc rw,relatime shared
├─/sys sys sysfs rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,mode=755 shared
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd shared
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct shared
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory shared
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset shared
│ │ ├─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices shared
│ │ ├─/sys/fs/cgroup/net_cls cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls shared
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio shared
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids shared
│ │ └─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer shared
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/firmware/efi/efivars efivarfs efivarfs rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime shared
│ ├─/sys/kernel/config configfs configfs rw,relatime shared
│ └─/sys/fs/fuse/connections fusectl fusectl rw,relatime shared
├─/dev dev devtmpfs rw,nosuid,relatime,size=10214012k,nr_inodes=2553503,mode=755 shared
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev shared
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 shared
│ ├─/dev/mqueue mqueue mqueue rw,relatime shared
│ └─/dev/hugepages hugetlbfs hugetlbfs rw,relatime shared
├─/run run tmpfs rw,nosuid,nodev,relatime,mode=755 shared
│ ├─/run/user/120 tmpfs tmpfs rw,nosuid,nodev,relatime,size=2044224k,mode=700,uid=120,gid=120 shared
│ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatime,size=2044224k,mode=700,uid=1000,gid=1000 shared
│ └─/run/user/1000/gvfs gvfsd-fuse fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 shared
├─/mnt/nas systemd-1 autofs rw,relatime,fd=30,pgrp=1,timeout=60,minproto=5,maxproto=5,direct shared
│ └─/mnt/nas sshfs@172.16.20.2:/mnt/se fuse.sshfs rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other shared
├─/boot /dev/sda1 vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro shared
├─/tmp tmpfs tmpfs rw,nosuid,nodev shared
├─/var/lib/origin/openshift.local.volumes/pods/2240aeed-5a58-11e6-bf25-507b9dab1f38/volumes/kubernetes.io~secret/deployer-token-63j3z tmpfs tmpfs rw,relatime shared
├─/var/lib/origin/openshift.local.volumes/pods/34fdcc8d-5a58-11e6-bf25-507b9dab1f38/volumes/kubernetes.io~secret/deployer-token-juf3i tmpfs tmpfs rw,relatime shared
└─/var/lib/origin/openshift.local.volumes/pods/228d749b-5a58-11e6-bf25-507b9dab1f38/volumes/kubernetes.io~secret/deployer-token-63j3z tmpfs tmpfs rw,relatime shared
Can you provide the oc cluster up command line you used, as well as its output?
$ oc cluster up --version=latest
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking for existing OpenShift container ...
Deleted existing OpenShift container
-- Checking for openshift/origin:latest image ... OK
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ...
WARNING: Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients.
-- Checking type of volume mount ...
Using nsenter mounter for OpenShift volumes
-- Checking Docker version ... OK
-- Creating host directories ... OK
-- Finding server IP ...
Using 10.34.129.138 as the server IP
-- Starting OpenShift container ...
Creating initial OpenShift configuration
Starting OpenShift using container 'origin'
Waiting for API server to start listening
OpenShift server started
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://10.34.129.138:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin
Same issue for me. Actually, not even the registry deploys properly. Same OS as tnozicka.
oc version
oc v1.3.0-alpha.2
kubernetes v1.3.0-alpha.1-331-g0522e63
docker images
openshift/origin-deployer latest 8f9a93a02ed2 4 hours ago 483.6 MB
openshift/origin latest f31a4bdd54ab 4 hours ago 483.6 MB
openshift/origin-pod latest 547301c0a6f3 4 hours ago 1.591 MB
oc get all
NAME REVISION REPLICAS TRIGGERED BY
docker-registry 1 1 config
router 1 1 config
NAME DESIRED CURRENT AGE
docker-registry-1 0 0 9m
router-1 0 0 9m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.232.74 <none> 5000/TCP 10m
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 10m
router 172.30.138.21 <none> 80/TCP,443/TCP,1936/TCP 10m
NAME READY STATUS RESTARTS AGE
docker-registry-1-deploy 0/1 Error 0 9m
router-1-deploy 0/1 Error 0 9m
findmnt
TARGET SOURCE FSTYPE OPTIONS PROPAGATION
/ /dev/sdb1 btrfs rw,relatime,compress=lzo,ssd,discard,space_cache,subvolid=5,subvol=/ shared
├─/proc proc proc rw,nosuid,nodev,noexec,relatime shared
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct shared
├─/sys sys sysfs rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,mode=755 shared
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd shared
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct shared
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio shared
│ │ ├─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer shared
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory shared
│ │ ├─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices shared
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids shared
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset shared
│ │ └─/sys/fs/cgroup/net_cls cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls shared
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime shared
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime shared
│ └─/sys/kernel/config configfs configfs rw,relatime shared
├─/dev dev devtmpfs rw,nosuid,relatime,size=5998160k,nr_inodes=1499540,mode=755 shared
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev shared
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 shared
│ ├─/dev/hugepages hugetlbfs hugetlbfs rw,relatime shared
│ └─/dev/mqueue mqueue mqueue rw,relatime shared
├─/run run tmpfs rw,nosuid,nodev,relatime,mode=755 shared
│ ├─/run/user/120 tmpfs tmpfs rw,nosuid,nodev,relatime,size=1200256k,mode=700,uid=120,gid=120 shared
│ └─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatime,size=1200256k,mode=700,uid=1000,gid=1000 shared
│ └─/run/user/1000/gvfs gvfsd-fuse fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 shared
├─/tmp tmpfs tmpfs rw shared
├─/var/lib/origin/openshift.local.volumes/pods/60534ea8-5d3b-11e6-85e3-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-kbidd tmpfs tmpfs rw,relatime shared
├─/var/lib/origin/openshift.local.volumes/pods/605372e0-5d3b-11e6-85e3-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-kbidd tmpfs tmpfs rw,relatime shared
├─/var/lib/origin/openshift.local.volumes/pods/a14b9448-5d43-11e6-9e4c-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-8oavf tmpfs tmpfs rw,relatime shared
├─/var/lib/origin/openshift.local.volumes/pods/a1628f7b-5d43-11e6-9e4c-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-8oavf tmpfs tmpfs rw,relatime shared
├─/var/lib/origin/openshift.local.volumes/pods/332d32e9-5d44-11e6-aa10-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-f1ewo tmpfs tmpfs rw,relatime shared
└─/var/lib/origin/openshift.local.volumes/pods/332f8d09-5d44-11e6-aa10-6c8814bca32c/volumes/kubernetes.io~secret/deployer-token-f1ewo tmpfs tmpfs rw,relatime shared
Also docker version in case it matters:
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.6.2
Git commit: b9f10c9
Built: Tue Jun 21 00:43:14 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.6.2
Git commit: b9f10c9
Built: Tue Jun 21 00:43:14 2016
OS/Arch: linux/amd64
I'll be installing arch soon to take a look
@ncdc Thanks.
I am also looking into it right now. Can it be connected to: https://github.com/kubernetes/kubernetes.github.io/pull/905/files ?
Currently I have in my docker systemd file:
MountFlags=524288
What does 524288 correspond to?
For me MountFlags is slave
/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
[Install]
WantedBy=multi-user.target
/lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
PartOf=docker.service
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
@ncdc Sorry, I don't actually know what it corresponds to :( But in the config I have slave as well.
$ grep MountFlags /lib/systemd/system/docker.service
MountFlags=slave
$ systemctl show docker | grep MountFlags
MountFlags=524288
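For what it's worth, 524288 is 0x80000, i.e. the kernel's MS_SLAVE mount flag, so systemctl show is just printing the numeric form of MountFlags=slave. A quick way to confirm in Go (a small sketch using golang.org/x/sys/unix):
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// MS_SLAVE is 1<<19 == 0x80000 == 524288, matching the value that
	// `systemctl show docker` reports for MountFlags=slave.
	fmt.Printf("MS_SLAVE = %d (%#x)\n", unix.MS_SLAVE, unix.MS_SLAVE)
}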
Changing MountFlags to shared made no difference for me
The bug was introduced between v1.3.0-alpha.2 and v1.3.0-alpha.3.
I can confirm that:
- oc cluster up --version=v1.3.0-alpha.2 works for me
- oc cluster up --version=v1.3.0-alpha.3 doesn't work for me
Here's what I'm seeing (using oc from commit aaea4d9):
- The docker unit file has MountFlags=slave
- The volumes dir in the origin container has a propagation mode of rprivate
- If I change https://github.com/openshift/origin/blob/master/pkg/bootstrap/docker/openshift/helper.go#L221 so that the mount has :rslave at the end, secrets get mounted just fine
- If I change it to :rshared, the container can't start:
FAIL
Error: could not create OpenShift configuration
Caused By:
Error: cannot start container 34ccdc97845b920bbf3aab4d2027df209d298098cd9a9d2e9522a6923e955606
Caused By:
Error: API error (500): linux mounts: Path /var/lib/origin/openshift.local.volumes is mounted on / but it is not a shared mount.
I'm not sure what specifically changed in the image itself between alpha 2 and 3 that would break things...
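For anyone reproducing this, the propagation mode inside the origin container can be inspected directly (the container name 'origin' is what oc cluster up creates):
$ docker exec origin findmnt -o TARGET,PROPAGATION /var/lib/origin/openshift.local.volumes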
More people (@rupalibehera, @kadel) are hitting the issue, so I will put my temporary workaround (as suggested by @ncdc) here:
A patch for the broken line https://github.com/openshift/origin/blob/master/pkg/bootstrap/docker/openshift/helper.go#L221:
diff --git a/pkg/bootstrap/docker/openshift/helper.go b/pkg/bootstrap/docker/openshift/helper.go
index a89de30..f5ad470 100644
--- a/pkg/bootstrap/docker/openshift/helper.go
+++ b/pkg/bootstrap/docker/openshift/helper.go
@@ -218,7 +218,7 @@ func (h *Helper) Start(opt *StartOptions, out io.Writer) (string, error) {
env = append(env, "OPENSHIFT_CONTAINERIZED=false")
} else {
binds = append(binds, "/:/rootfs:ro")
- binds = append(binds, fmt.Sprintf("%[1]s:%[1]s", opt.HostVolumesDir))
+ binds = append(binds, fmt.Sprintf("%[1]s:%[1]s:rslave", opt.HostVolumesDir))
}
env = append(env, opt.Environment...)
binds = append(binds, fmt.Sprintf("%s:/var/lib/origin/openshift.local.config:z", opt.HostConfigDir))
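With this one-line change, mounts made under the host volumes dir propagate into the origin container, so the deployer pods can read their token files again. After applying it, rebuild oc from the origin tree (using the standard origin build scripts) and re-run oc cluster up --version=latest to verify.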
I also believe this should be labeled as a BUG, not a question.
What's the status on this?
So here is what k8s v1.2.x was doing...
The secret volume plugin writes with the volumeHost.writer:
https://github.com/kubernetes/kubernetes/blob/v1.2.6/pkg/volume/secret/secret.go#L188
which comes from the kubelet:
https://github.com/kubernetes/kubernetes/blob/v1.2.6/pkg/kubelet/volumes.go#L106
which is set as the NsenterWriter when running containerized:
https://github.com/kubernetes/kubernetes/blob/v1.2.6/cmd/kubelet/app/server.go#L136-L140
In v1.3.0, the secret volume writer is created independently from the Kubelet writer:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/secret/secret.go#L198-L199
I don't know enough about the secret writer rewrite to say whether this was intentional... which is where we need @pmorie's help
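To make the distinction concrete, here is a minimal Go sketch of the two write paths -- not the actual kubelet code; the nsenter invocation and the /rootfs path mirror how the containerized kubelet of v1.2.x reached the host's mount namespace:
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os/exec"
)

// directWrite writes in the calling process's own mount namespace. When the
// kubelet runs in a container whose volumes dir is rprivate, the host and
// sibling containers never see the file -- hence the missing token.
func directWrite(path string, data []byte) error {
	return ioutil.WriteFile(path, data, 0600)
}

// nsenterWrite sketches the NsenterWriter approach: enter the host's mount
// namespace via pid 1 (exposed through the /rootfs bind-mount) and write there.
func nsenterWrite(path string, data []byte) error {
	cmd := exec.Command("nsenter", "--mount=/rootfs/proc/1/ns/mnt", "--",
		"sh", "-c", fmt.Sprintf("cat > %q", path))
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	token := []byte("example-token")
	fmt.Println(directWrite("/tmp/token-direct", token))
	fmt.Println(nsenterWrite("/tmp/token-nsenter", token))
}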
Breaking containerized origin is bad.
Looking at this now.
1.12 just hit Arch's repositories. I'll do the test with both versions actually.
@pmorie Here are my results:
- Is the secret data present on the host fs?
I don't think so. I believe this has been established by @ncdc's finding that the propagation mode is rprivate (#10215 (comment)).
/var/lib/origin/openshift.local.volumes/
├── plugins
└── pods
├── 779230e6-65e8-11e6-ad25-1002b5b31771
│ ├── containers
│ │ └── deployment
│ │ └── 87dfddf3
│ ├── etc-hosts
│ ├── plugins
│ │ └── kubernetes.io~empty-dir
│ │ └── wrapped_deployer-token-9cpko
│ │ └── ready
│ └── volumes
│ └── kubernetes.io~secret
│ └── deployer-token-9cpko
└── 77926134-65e8-11e6-ad25-1002b5b31771
├── containers
│ └── deployment
│ └── 0ab3a332
├── etc-hosts
├── plugins
│ └── kubernetes.io~empty-dir
│ └── wrapped_deployer-token-9cpko
│ └── ready
└── volumes
└── kubernetes.io~secret
└── deployer-token-9cpko
- Is the secret data present in the origin container fs?
Seems like it.
/var/lib/origin/openshift.local.volumes/
|-- plugins
`-- pods
|-- 779230e6-65e8-11e6-ad25-1002b5b31771
| |-- containers
| | `-- deployment
| | `-- 87dfddf3
| |-- etc-hosts
| |-- plugins
| | `-- kubernetes.io~empty-dir
| | `-- wrapped_deployer-token-9cpko
| | `-- ready
| `-- volumes
| `-- kubernetes.io~secret
| `-- deployer-token-9cpko
| |-- ca.crt -> ..data/ca.crt
| |-- namespace -> ..data/namespace
| |-- service-ca.crt -> ..data/service-ca.crt
| `-- token -> ..data/token
`-- 77926134-65e8-11e6-ad25-1002b5b31771
|-- containers
| `-- deployment
| `-- 0ab3a332
|-- etc-hosts
|-- plugins
| `-- kubernetes.io~empty-dir
| `-- wrapped_deployer-token-9cpko
| `-- ready
`-- volumes
`-- kubernetes.io~secret
`-- deployer-token-9cpko
|-- ca.crt -> ..data/ca.crt
|-- namespace -> ..data/namespace
|-- service-ca.crt -> ..data/service-ca.crt
`-- token -> ..data/token
Okay, so after some digging, I found that:
- Technically @csrwng is not wrong -- the switch from using NsenterWriter to ioutil.WriteFile is what caused this issue to surface.
- However! NsenterWriter is basically a hack from the time before Docker allowed us to set the propagation mode on bind-mounts.
- The fix we should make is to set the propagation mode on the rootfs volume in oc cluster up (see the docker-level illustration below) -- I'd really like to remove NsenterWriter upstream.
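At the docker CLI level, that fix corresponds to requesting slave propagation on the volumes bind-mount (supported since Docker 1.10; other oc cluster up options elided):
$ docker run -v /var/lib/origin/openshift.local.volumes:/var/lib/origin/openshift.local.volumes:rslave ... openshift/origin start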
@pmorie So the issue is that if we do that, we'd be dropping support for Docker 1.9. Are we ready to do that?
We could mitigate the issue by only using containerized mode when running with Docker < 1.10
@smarterclayton any opinion on this matter? I think making nsenter-based approaches work with AtomicWriter is going to be fraught with peril.
This is effectively the same thing as removing 1.9 support for containerized.
Actually, 1.9 works in Red Hat distros (Fedora and RHEL).
I've submitted a fix to cluster up to always use shared volumes except if Docker is 1.9, in which case it will determine whether you can actually run containerized or not.
#10552
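A rough Go sketch of that gating logic (hypothetical names; the real change is in #10552, and it uses the :rslave mode from the earlier workaround here -- the exact propagation mode chosen by the PR may differ):
package main

import "fmt"

// chooseVolumeBind sketches the gate described above: on Docker >= 1.10,
// bind-mount the volumes dir with slave propagation; on 1.9, fall back to
// probing whether containerized (nsenter) mode actually works.
func chooseVolumeBind(hostVolumesDir string, major, minor int) (bind string, probeNsenter bool) {
	if major > 1 || (major == 1 && minor >= 10) {
		return fmt.Sprintf("%[1]s:%[1]s:rslave", hostVolumesDir), false
	}
	// Docker 1.9: no propagation flags on bind-mounts.
	return fmt.Sprintf("%[1]s:%[1]s", hostVolumesDir), true
}

func main() {
	bind, probe := chooseVolumeBind("/var/lib/origin/openshift.local.volumes", 1, 11)
	fmt.Println(bind, probe)
}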