openshift/origin

Docker 1.7 cannot mount secrets

liggitt opened this issue · 142 comments

When we started using secrets for deployments, we noticed that containers are not able to read mounted secrets.

The pod definitions contain Volume and VolumeMount definitions, and docker inspect shows the volumes as expected, but the container cannot read files from the mount point.

This surfaces (in the case of the deployer pod) as this error:

F0610 18:32:48.935073       1 deployer.go:65] User "system:anonymous" cannot get replicationcontrollers in project "myproject"

docker inspect <container> shows the volume mount:

...
        "Env": [
...
            "BEARER_TOKEN_FILE=/var/run/secrets/kubernetes.io/serviceaccount/token",
...
    "HostConfig": {
        "Binds": [
            "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw:/var/run/secrets/kubernetes.io/serviceaccount:ro",
...
        ],
...
    "Volumes": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw"
    },
    "VolumesRW": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": false
    },
    "VolumesRelabel": {
...
        "/var/run/secrets/kubernetes.io/serviceaccount": "ro"
    }
...

@csrwng @smarterclayton was there a fix for the boot2docker tmpfs issue?

The containerized flag in the kubelet should allow you to mount.


curious that the kubelet doesn't complain about creating the mount

What is that ~ in kubernetes.io~secret?

"/var/run/secrets/kubernetes.io/serviceaccount": "/openshift.local.volumes/pods/12f168c2-0fad-11e5-a1f9-525400553cbb/volumes/kubernetes.io~secret/deployer-token-2jxjw"

Looks like this is what is keeping me from docker pulling latest and having the build successfully publish to the registry. My happy-path dev experience based on a docker-launched origin isn't happy :(

I have a todo to fix this - basically we need to set the containerized flag and then add it to the e2e tests so it doesn't break.


Any workaround available for this, until it's fixed for good?

You have to write out a node config file and then set a kubeletArguments entry of "containerized" with "true" as the argument (you need to specify it as a nested string array in the YAML - kubeletArguments is a map[string][]string):

kubeletArguments:
  containerized:
  - true
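
In practice the flow looks roughly like this (a sketch; it assumes a build whose openshift start supports --write-config, and that the node directory follows the node-<hostname> convention):

# Write out the default config files, then append the containerized flag
CFG=/var/lib/openshift/openshift.local.config
openshift start --write-config=$CFG
cat >> $CFG/node-boot2docker/node-config.yaml <<'EOF'
kubeletArguments:
  containerized:
  - "true"
EOF
# "true" is quoted so YAML keeps it a string instead of parsing a boolean

# Start again from the edited configs
openshift start --master-config=$CFG/master/master-config.yaml \
  --node-config=$CFG/node-boot2docker/node-config.yaml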


I can't write a different node config, as it is created when the container starts :)
Any plan to update the v0.6 docker image with this?
Thanks

It'll probably be in 0.6.1


Ok thanks. The sooner the better, we're stuck with this :)

Try #3112 - you'll need to build your own openshift/origin image from the branch with hack/build-release.sh and then hack/build-images.sh. Still testing myself.
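
For reference, checking out and building the PR looks roughly like this (a sketch; the local branch name pr-3112 is arbitrary):

git fetch https://github.com/openshift/origin pull/3112/head:pr-3112
git checkout pr-3112
hack/build-release.sh    # build the openshift binaries
hack/build-images.sh     # build the openshift/origin images from them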


👍 will test that. Thanks!

Rah, I can't compile a new version using boot2docker:

++ Building go targets for linux/amd64: cmd/openshift
        # github.com/openshift/origin/cmd/openshift
/usr/lib/golang/pkg/tool/linux_amd64/6l: running gcc failed: Cannot allocate memory

I could raise the memory in VirtualBox, but that implies destroying the current VM, and I can't remove everything... I will wait for your tests then. Let me know if the new image can be pulled from somewhere.
Thanks

I have just rebuilt the image from master, and the registry won't start either:

W0612 18:56:12.839622       1 container_manager_linux.go:68] [ContainerManager] Failed to ensure Docker is in a container: failed to find pid of Docker container: exit status 1
E0612 18:56:17.800696       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1; skipping pod
E0612 18:56:17.817346       1 pod_workers.go:108] Error syncing pod 79031390-1134-11e5-874a-8277bc1719bf, skipping: exit status 1

I'm running openshift with:

$ docker run -d -name "origin" \
 --privileged --net=host \
 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw \
 openshift/origin start --public-master=$(boot2docker ip)
$ docker run -it --rm openshift/origin:latest version
openshift v0.6-179-gcc71b54
kubernetes v0.17.1-804-g496be63

Should I open another issue?

Can you repro with --loglevel=5 and look for the same log line? It should print the mount output.

Did you rebuild the base images as well?


I0612 20:06:36.080128       1 empty_dir_linux.go:38] Determining mount medium of /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82
I0612 20:06:36.092700       1 nsenter_mount.go:139] findmnt command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/findmnt -o target --noheadings --target /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:06:36.133421       1 empty_dir_linux.go:48] Statfs_t of %v: %+v/var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82{1635083891 4096 4762473 3929050 3681369 1218224 1171153 {[0 0]} 242 4096 4128 [0 0 0 0]}
I0612 20:06:36.133538       1 docker.go:321] Docker Container: /origin is not managed by kubelet.
I0612 20:06:36.133534       1 empty_dir.go:202] pod 873a34bc-113e-11e5-bf0e-22549f88d0e6: mounting tmpfs for volume not-used with opts []
I0612 20:06:36.133580       1 nsenter_mount.go:79] nsenter Mounting tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82 tmpfs []
I0612 20:06:36.133612       1 nsenter_mount.go:82] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/mount -t tmpfs -o  tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:06:36.176765       1 nsenter_mount.go:86] Output from mount command: nsenter: failed to execute /usr/bin/mount: No such file or directory
E0612 20:06:36.176972       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1; skipping pod
I0612 20:06:36.177006       1 kubelet.go:2051] Generating status for "docker-registry-1-deploy_default"
I0612 20:06:36.177512       1 server.go:569] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-deploy", UID:"873a34bc-113e-11e5-bf0e-22549f88d0e6", APIVersion:"v1", ResourceVersion:"182", FieldPath:""}): reason: 'failedMount' Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 1
I0612 20:06:36.196478       1 kubelet.go:1990] pod waiting > 0, pending
E0612 20:06:36.196881       1 pod_workers.go:108] Error syncing pod 873a34bc-113e-11e5-bf0e-22549f88d0e6, skipping: exit status 1
I0612 20:06:36.197236       1 server.go:569] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"docker-registry-1-deploy", UID:"873a34bc-113e-11e5-bf0e-22549f88d0e6", APIVersion:"v1", ResourceVersion:"182", FieldPath:""}): reason: 'failedSync' Error syncing pod, skipping: exit status 1

Relevant part, I think: Output from mount command: nsenter: failed to execute /usr/bin/mount: No such file or directory

mount is available in the container:

[root@boot2docker openshift]# ls /usr/bin/mount
/usr/bin/mount
[root@boot2docker openshift]# which mount
/usr/bin/mount

If it's the host mount command, it's /bin/mount, not /usr/bin/mount. You should of course use which mount instead of absolute paths.

after aliasing mount on the host, I have this:

I0612 20:17:48.342643       1 nsenter_mount.go:82] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/mount -t tmpfs -o  tmpfs /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82]
I0612 20:17:48.353253       1 nsenter_mount.go:86] Output from mount command: mount: mounting tmpfs on /var/lib/openshift/openshift.local.volumes/pods/873a34bc-113e-11e5-bf0e-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-wjx82 failed: No such file or directory
E0612 20:17:48.353584       1 kubelet.go:1111] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 255; skipping pod

Hrm, the command-line mounts should have handled that. Try adding /var/lib/openshift:/var/lib/openshift just to be sure.
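
i.e. the same docker run as before, with the extra bind mount added (sketch):

docker run -d --name "origin" \
 --privileged --net=host \
 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro \
 -v /var/lib/docker:/var/lib/docker:rw \
 -v /var/lib/openshift:/var/lib/openshift \
 openshift/origin start --loglevel=5 --public-master=$(boot2docker ip)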


back to square 1 :(

[root@boot2docker openshift]# oc get pods
NAME                       READY     REASON         RESTARTS   AGE
docker-registry-1-deploy   0/1       ExitCode:255   0          26s
[root@boot2docker openshift]# oc logs docker-registry-1-deploy
F0612 21:20:31.562161       1 deployer.go:65] User "system:anonymous" cannot get replicationcontrollers in project "default"

If the volume doesn't mount, secrets can't be loaded, which means the deployer doesn't know what's going on. The -v didn't change your issue?


I guess the volume mounting is working, but there's probably an issue elsewhere. I'm not even sure what to look for, sorry for that :(

I0612 21:20:39.533361       1 nsenter_mount.go:79] nsenter Mounting tmpfs /var/lib/openshift/openshift.local.volumes/pods/d9c8549e-1148-11e5-8609-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-awbf9 tmpfs []
I0612 21:20:39.533432       1 nsenter_mount.go:82] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /usr/bin/mount -t tmpfs -o  tmpfs /var/lib/openshift/openshift.local.volumes/pods/d9c8549e-1148-11e5-8609-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-awbf9]
I0612 21:20:39.547123       1 secret.go:135] Received secret default/deployer-token-awbf9 containing (1) pieces of data, 850 total bytes
I0612 21:20:39.547234       1 secret.go:140] Writing secret data default/deployer-token-awbf9/token (850 bytes) to host file /var/lib/openshift/openshift.local.volumes/pods/d9c8549e-1148-11e5-8609-22549f88d0e6/volumes/kubernetes.io~secret/deployer-token-awbf9/token
I0612 21:20:39.547467       1 kubelet.go:2051] Generating status for "docker-registry-1-deploy_default"

Hrm, will need to dig in more. The mount command should definitely be added.


ok. Please ping me if you want to share a screen. I really want to move forward on this, and I suspect you don't have a Mac near you to test ;)

Sent from my mac... :) but I'm running the vagrant vm, not boot2docker, so slightly different.


Ok, no problem :)
I will fall back to vagrant until this is fixed.
Thanks again for your time and patience.

While you are fixing paths, I often see this in logs:

W0613 15:14:11.130391       1 container_manager_linux.go:68] [ContainerManager] Failed to ensure Docker is in a container: failed to find pid of Docker container: exec: "pidof": executable file not found in $PATH

Thanks :)

That's fixed in the latest images.


You'll need to rebuild the base image though - hack/build-base-images.sh unless you pull a newer openshift/origin-base
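
So the full from-source rebuild sequence is roughly:

hack/build-base-images.sh    # rebuild openshift/origin-base (or docker pull a newer one)
hack/build-release.sh
hack/build-images.sh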


Hmm, I just tested on a fresh boot2docker instance (so no image cache present).
I have started the container:

$ docker run -d -name "origin" \
 --privileged --net=host \
 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw \
 openshift/origin start --loglevel=5 --public-master=$(boot2docker ip)

logged into the container, and created the registry:

oadm registry --credentials=./openshift.local.config/master/openshift-registry.kubeconfig
deploymentconfigs/docker-registry
services/docker-registry

but the registry won't start:

[root@boot2docker openshift]# oc status
In project default

service docker-registry (172.30.227.70:5000)
  docker-registry deploys docker.io/openshift/origin-docker-registry:v0.6
    #1 deployment failed 2 minutes ago

service kubernetes (172.30.0.2:443)

service kubernetes-ro (172.30.0.1:80)

To see more information about a Service or DeploymentConfig, use 'oc describe service <name>' or 'oc describe dc <name>'.
You can use 'oc get all' to see lists of each of the types described above.
[root@boot2docker openshift]# oc get pods
NAME                       READY     REASON    RESTARTS   AGE
docker-registry-1-deploy   0/1       Pending   0          31s
[root@boot2docker openshift]# oc logs docker-registry-1-deploy
F0614 01:17:55.128198       1 deployer.go:65] User "system:anonymous" cannot get replicationcontrollers in project "default"

Still the same error :(
Sorry!

It seems like the latest image is still the same to me:

$ docker run -it --rm openshift/origin:latest version
openshift v0.6-160-gdfb6736
kubernetes v0.17.1-804-g496be63

Looks like something is wrong with how mounts work in boot2docker, or with how the mount is being invoked. @pmorie are you aware of reasons that multiple mounts would be created on the same endpoint? This is not happening in vagrant fedora, only boot2docker.

tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)
tmpfs on /var/lib/openshift/openshift.local.volumes/pods/0972d2ff-12c9-11e5-91f5-4eb0eca2f9ab/volumes/kubernetes.io~secret/default-token-fwnql type tmpfs (rw,relatime)

Is this a different kind of mount shadowing happening?

where do you see this? Inside the openshift container?
I don't have anything like this:

[root@boot2docker openshift]# mount | grep tmpfs
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /rootfs type tmpfs (ro,relatime,size=1847076k)
cgroup on /rootfs/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
tmpfs on /rootfs/dev/shm type tmpfs (rw,relatime)
tmpfs on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev type tmpfs (rw,nosuid,mode=755)
shm on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
tmpfs on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev type tmpfs (rw,nosuid,mode=755)
shm on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/rootfs type tmpfs (ro,relatime,size=1847076k)
cgroup on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/rootfs/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
tmpfs on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/rootfs/dev/shm type tmpfs (rw,relatime)
tmpfs on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/rootfs/mnt/sda1/var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev type tmpfs (rw,nosuid,mode=755)
shm on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/rootfs/mnt/sda1/var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
cgroup on /var/lib/docker/aufs/mnt/7681e3a427f0bd6f878fb202ec567b53ab902e9e3a2b7d9a4d87be5d606bf0fa/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
tmpfs on /var/lib/openshift/openshift.local.volumes type tmpfs (rw,relatime,size=1847076k)
tmpfs on /run type tmpfs (rw,relatime,size=1847076k)

Ok, reproduced...

To be clear, this is not a Mac issue, this is a boot2docker issue. Something about how we mount on the distro of boot2docker (which looks Debianish) is not correct.

I tried the suggested workaround, writing out a node config, and I'm getting an error:

[start_allinone.go:89] OpenShift could not start: could not load config file "/sspeiche/oo-hack/node-boot2docker/node-config.yaml" due to a error: json: cannot unmarshal bool into Go value of type string

Interesting that I'm getting a JSON error message.

Here's my complete node config

allowDisabledDocker: true
apiVersion: v1
dnsDomain: cluster.local
dnsIP: 10.0.2.15
dockerConfig:
  execHandlerName: native
imageConfig:
  format: openshift/origin-${component}:${version}
  latest: true
kind: NodeConfig
masterKubeConfig: node.kubeconfig
networkPluginName: ""
nodeName: boot2docker
podManifestConfig: null
servingInfo:
  bindAddress: 0.0.0.0:10250
  certFile: server.crt
  clientCA: node-client-ca.crt
  keyFile: server.key
volumeDirectory: /var/lib/openshift/openshift.local.volumes
kubeletArguments:
  containerized:
  - true

Quote "true"?

Thanks @liggitt, still an interesting JSON error message about my YAML.

YAML is trying to help you by turning true into a boolean. You'll need to put quotes around it.
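
i.e. the stanza should read:

kubeletArguments:
  containerized:
  - "true"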


Set containerized to "true", started with the node config, and I don't see any different behavior (i.e. the build pod is stuck in the Pending state).

If it helps, here's the output of docker inspect https://gist.github.com/sspeiche/efe0232a36a967e94fd9

Log keeps repeating this:

E0616 20:38:05.622475    4589 pod_workers.go:108] Error syncing pod bce93cb7-1466-11e5-9624-a6c96a3fa141, skipping: exit status 255
E0616 20:38:05.626925    4589 pod_workers.go:108] Error syncing pod 5dbbdbbe-1466-11e5-9624-a6c96a3fa141, skipping: exit status 255
E0616 20:38:15.578332    4589 kubelet.go:1114] Unable to mount volumes for pod "docker-registry-1-deploy_default": exit status 255; skipping pod
E0616 20:38:15.823758    4589 kubelet.go:1114] Unable to mount volumes for pod "router-1-deploy_default": exit status 255; skipping pod
E0616 20:38:15.866266    4589 kubelet.go:1114] Unable to mount volumes for pod "nodejs-example-2-build_nodejs": exit status 255; skipping pod

@sspeiche You can use the v0.5.3 docker image until it's fixed. You'll have to use osc instead of oc, osadm instead of oadm, but every basic feature is already present.

Quick update with openshift 1.0.0 and boot2docker 1.7.0:

[root@boot2docker openshift]# oc logs docker-registry-1-deploy
E0620 15:47:32.761985       1 clientcmd.go:128] Error reading BEARER_TOKEN_FILE "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
E0620 15:47:32.762772       1 clientcmd.go:146] Error reading BEARER_TOKEN_FILE "/var/run/secrets/kubernetes.io/serviceaccount/token": open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
F0620 15:47:32.889223       1 deployer.go:63] couldn't get deployment default/docker-registry-1: User "system:anonymous" cannot get replicationcontrollers in project "default"
[root@boot2docker openshift]#

Error messages are cleaner, that's the good news!

:). Paul will hopefully be able to take a look at this soon - we got bogged down in debugging some other secret and mounting related issues. It's almost certainly something simple (a difference in behavior in mount on the boot2docker ISO)


This is near the top of my list.

@gravis

What is that ~ in kubernetes.io~secret

That whole token is the 'escaped and qualified' name of the plugin for writing paths onto disk. You'll see that type of token in the path for all kubernetes / openshift volumes on the actual host disk.
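
For example, the secret volume plugin is named kubernetes.io/secret, and the / is escaped to ~ so the plugin name can serve as a single directory component on disk:

/var/lib/openshift/openshift.local.volumes/pods/<pod-uid>/volumes/kubernetes.io~secret/<volume-name>/token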

@sspeiche @gravis Could one of you reproduce this at loglevel=5 and gist the openshift log? I'm looking for this log message:

https://github.com/openshift/origin/blob/master/Godeps/_workspace/src/github.com/GoogleCloudPlatform/kubernetes/pkg/util/mount/nsenter_mount.go#L100

There you go:

I0628 18:51:04.710779       1 nsenter_mount.go:100] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t tmpfs tmpfs /var/lib/openshift/openshift.local.volumes/pods/9c7f6b9a-1dc6-11e5-8c54-160bdaa26acb/volumes/kubernetes.io~secret/deployer-token-o5uli]
I0628 18:52:21.998439       1 nsenter_mount.go:100] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t tmpfs tmpfs /var/lib/openshift/openshift.local.volumes/pods/9c7f6b9a-1dc6-11e5-8c54-160bdaa26acb/volumes/kubernetes.io~secret/deployer-token-o5uli]
I0628 18:52:25.810327       1 nsenter_mount.go:100] Mount command: nsenter [--mount=/rootfs/proc/1/ns/mnt -- /bin/mount -t tmpfs tmpfs /var/lib/openshift/openshift.local.volumes/pods/9c7f6b9a-1dc6-11e5-8c54-160bdaa26acb/volumes/kubernetes.io~secret/deployer-token-o5uli]

From: https://gist.github.com/gravis/b03932768af368d46b35

Hrm. It looks from the log like the mounts are succeeding. I see the kubelet fetching the secret data and laying it down on disk, too.

@gravis You're still running into the problem with the registry not starting? Could you gist the registry logs?

From debugging w/ @gravis on IRC, the problem is no longer that the registry isn't starting -- it's that the deploy container is having a problem with the credentials it's supposed to use to query the master:

couldn't get deployment default/docker-registry-1: User "system:anonymous" cannot get replicationcontrollers in project "default"

Is there a message about the bearer token file not being readable?

There are multiple mounts in the case I saw, where it looked like the later mounts were hiding / overriding the earlier mounts (the ones that had the secret in them).


pod descriptor: https://gist.github.com/gravis/e37464cea987be61e138

@smarterclayton no multiple mounts in this case

And as I said on IRC, I have the same error with any other pod; it's not related to the deploy of the registry.

Can you run mount inside the container to see the actual mounts (not the pod mount definitions)?

none on / type aufs (rw,relatime,si=300bce299df0725,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
tmpfs on /rootfs type tmpfs (ro,relatime,size=1845380k)
proc on /rootfs/proc type proc (rw,relatime)
sysfs on /rootfs/sys type sysfs (rw,relatime)
fusectl on /rootfs/sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /rootfs/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /rootfs/sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /rootfs/sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /rootfs/sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /rootfs/sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /rootfs/sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /rootfs/sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /rootfs/sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /rootfs/sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /rootfs/sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /rootfs/sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
devpts on /rootfs/dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
tmpfs on /rootfs/dev/shm type tmpfs (rw,relatime)
/dev/sda1 on /rootfs/mnt/sda1 type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /rootfs/mnt/sda1/var/lib/docker/aufs type ext4 (rw,relatime,data=ordered)
none on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd type aufs (rw,relatime,si=300bce299df0725,dio,dirperm1)
none on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd type aufs (rw,relatime,si=300bce299df0725,dio,dirperm1)
proc on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev type tmpfs (rw,nosuid,mode=755)
devpts on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys type sysfs (ro,relatime)
fusectl on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /rootfs/mnt/sda1/var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
none on /rootfs/Users type vboxsf (rw,nodev,relatime)
nsfs on /rootfs/var/run/docker/netns/default type nsfs (rw)
tmpfs on /run type tmpfs (rw,relatime,size=1845380k)
nsfs on /run/docker/netns/default type nsfs (rw)
/dev/sda1 on /var/lib/docker type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /var/lib/docker/aufs type ext4 (rw,relatime,data=ordered)
none on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd type aufs (rw,relatime,si=300bce299df0725,dio,dirperm1)
none on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd type aufs (rw,relatime,si=300bce299df0725,dio,dirperm1)
proc on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev type tmpfs (rw,nosuid,mode=755)
devpts on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys type sysfs (ro,relatime)
fusectl on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
tmpfs on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs type tmpfs (ro,relatime,size=1845380k)
proc on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/proc type proc (rw,relatime)
sysfs on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/sys type sysfs (rw,relatime)
fusectl on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/sys/fs/fuse/connections type fusectl (rw,relatime)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /var/lib/docker/aufs/mnt/a79dea0bcac0d5c6fee88605ecf7d77ca38ea8a6a41405794eaf42abe5d788fd/rootfs/sys/fs/c

from the "origin" container of course, the deploy container is Exited, I can't run commands in it.

You can run any container like busybox to test; this isn't specific to the deployer.

Could you be more specific please?
Is this what you want:

$ docker run -it --rm busybox mount
none on / type aufs (rw,relatime,si=300bce29585b725,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
devpts on /dev/console type devpts (rw,relatime,mode=600,ptmxmode=000)
proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/fs type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/irq type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,mode=755)

?

Sorry… create a pod using an image like busybox, then exec into it and run mount
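
For example (a sketch that shells in via docker directly, since the pod's container is visible to the docker daemon):

docker exec $(docker ps | grep busybox | awk '{print $1}' | head -1) mount | grep serviceaccount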

I can't create pods, that's the point :(
They fail with the error couldn't get deployment myproject/redis-master-1: User "system:anonymous" cannot get replicationcontrollers in project "myproject"

Not a deployment, a direct pod. I'm on mobile at the moment, I can get you an example pod.json in a bit

Deployment pods are failing because they require mounted secrets to talk to the API. If we can create a pod that doesn't require that to run (like busybox) we can get in and debug the issue with the mounted secret

@gravis I suggest running the upstream secrets example - it doesn't rely on the deployment feature and should help us diagnose whether the issue is the secrets volume or something else.

https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/secrets


Or you can try creating a pod from this pod definition, then view the resulting logs:
http://fpaste.org/237488/55424411/raw/

oc create -f pod.json
oc logs test
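
(That fpaste link has since expired; the following is a hypothetical reconstruction of the pod definition, inferred from the output below.)

cat > pod.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "test" },
  "spec": {
    "restartPolicy": "Never",
    "containers": [{
      "name": "test",
      "image": "busybox",
      "command": ["/bin/sh", "-c",
        "mount; ls -la / /var /var/run /var/run/secrets; cat /var/run/secrets/kubernetes.io/serviceaccount/token"]
    }]
  }
}
EOF

The default service account token secret is mounted automatically, so the pod doesn't need to declare the volume itself.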
$ oc create -f pod.json
pods/test
$ oc logs test
mount
none on / type aufs (rw,relatime,si=300bce28813f725,dio,dirperm1)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
tmpfs on /dev/termination-log type tmpfs (rw,relatime,size=1845380k)
tmpfs on /tmp/secrets/kubernetes.io/serviceaccount type tmpfs (ro,relatime)
/dev/sda1 on /etc/resolv.conf type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hostname type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /etc/hosts type ext4 (rw,relatime,data=ordered)
proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/fs type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/irq type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sys type proc (ro,nosuid,nodev,noexec,relatime)
proc on /proc/sysrq-trigger type proc (ro,nosuid,nodev,noexec,relatime)
tmpfs on /proc/kcore type tmpfs (rw,nosuid,mode=755)
tmpfs on /proc/timer_stats type tmpfs (rw,nosuid,mode=755)
/:
total 56
drwxr-xr-x   24 root     root          4096 Jun 29 02:23 .
drwxr-xr-x   24 root     root          4096 Jun 29 02:23 ..
-rwxr-xr-x    1 root     root             0 Jun 29 02:23 .dockerenv
-rwxr-xr-x    1 root     root             0 Jun 29 02:23 .dockerinit
drwxrwxr-x    2 root     root          4096 May 22  2014 bin
drwxr-xr-x    5 root     root           380 Jun 29 02:23 dev
drwxr-xr-x    6 root     root          4096 Jun 29 02:23 etc
drwxrwxr-x    4 root     root          4096 May 22  2014 home
drwxrwxr-x    2 root     root          4096 May 22  2014 lib
lrwxrwxrwx    1 root     root             3 May 22  2014 lib64 -> lib
lrwxrwxrwx    1 root     root            11 May 22  2014 linuxrc -> bin/busybox
drwxrwxr-x    2 root     root          4096 Feb 27  2014 media
drwxrwxr-x    2 root     root          4096 Feb 27  2014 mnt
drwxrwxr-x    2 root     root          4096 Feb 27  2014 opt
dr-xr-xr-x  100 root     root             0 Jun 29 02:23 proc
drwx------    2 root     root          4096 Feb 27  2014 root
lrwxrwxrwx    1 root     root             3 Feb 27  2014 run -> tmp
drwxr-xr-x    2 root     root          4096 May 22  2014 sbin
dr-xr-xr-x   13 root     root             0 Jun 29 02:23 sys
drwxrwxrwt    4 root     root          4096 Jun 29 02:23 tmp
drwxrwxr-x    6 root     root          4096 May 22  2014 usr
drwxrwxr-x    4 root     root          4096 May 22  2014 var
/var:
total 16
drwxrwxr-x    4 root     root          4096 May 22  2014 .
drwxr-xr-x   24 root     root          4096 Jun 29 02:23 ..
lrwxrwxrwx    1 root     root             6 Feb 27  2014 cache -> ../tmp
drwxrwxr-x    3 root     root          4096 May 22  2014 lib
lrwxrwxrwx    1 root     root             6 Feb 27  2014 lock -> ../tmp
lrwxrwxrwx    1 root     root             6 Feb 27  2014 log -> ../tmp
lrwxrwxrwx    1 root     root             6 Feb 27  2014 pcmcia -> ../tmp
lrwxrwxrwx    1 root     root             6 Feb 27  2014 run -> ../tmp
lrwxrwxrwx    1 root     root             6 Feb 27  2014 spool -> ../tmp
lrwxrwxrwx    1 root     root             6 Feb 27  2014 tmp -> ../tmp
drwxr-xr-x    2 www-data www-data      4096 May 22  2014 www
/var/run:
lrwxrwxrwx    1 root     root             6 Feb 27  2014 /var/run -> ../tmp
/var/run/secrets:
total 12
drwxr-xr-x    3 root     root          4096 Jun 29 02:23 .
drwxrwxrwt    4 root     root          4096 Jun 29 02:23 ..
drwxr-xr-x    3 root     root          4096 Jun 29 02:23 kubernetes.io
cat: can't open '/var/run/secrets/kubernetes.io/serviceaccount/token': No such file or directory

I have replaced the last command of the pod with a find in /var/run/secrets:

[...]
/var/run/secrets:
total 12
drwxr-xr-x    3 root     root          4096 Jun 29 02:25 .
drwxrwxrwt    4 root     root          4096 Jun 29 02:25 ..
drwxr-xr-x    3 root     root          4096 Jun 29 02:25 kubernetes.io
/var/run/secrets/kubernetes.io/
/var/run/secrets/kubernetes.io/serviceaccount

@gravis

Just for kicks, what's at /tmp/secrets/kubernetes.io/serviceaccount/ ?


just /tmp/secrets/kubernetes.io/serviceaccount/, nothing more

After some debugging on IRC w/ @gravis today, we know the root cause of this issue. The issue is that in docker 1.7, the default mount propagation mode of bind-mounts was reverted to private, which means that mountpoints made in the openshift volume dir will not propagate into the OpenShift container's mount namespace. The PR that reverts the former shared propagation mode is:

moby/moby#13854

@smarterclayton

So, we will either need to have the shared mount mode restored as the default (slave would work too with our current approach that uses nsenter to do mounts when openshift/kubelet is containerized), or we will need to have a way to control the propagation mode of mounts.

@rootfs has a PR to docker to configure the daemon with the default propagation mode to use for bind-mounts, but I could see administrators wanting to make this a config item on the bind mount spec itself (a la :Z for selinux relabeling).
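
One quick way to confirm this from inside the origin container (a sketch; it assumes findmnt is available in the image):

findmnt -o TARGET,PROPAGATION /var/lib/openshift/openshift.local.volumes
# Under the docker 1.7 default this reports private, meaning mounts the
# kubelet makes on the host side of this bind never become visible in the
# container's mount namespace.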

@rhatdan weren't we trying to get shared back?


Let's prioritize this fix highly - make sure Dan and Mrunal and others are aware given its impact.


Summary: mount propagation is PRIVATE in Docker 1.7. This means that volumes mounted on the host can't be seen when the host's directory is mounted into the container. As a result, a Kubelet in a container cannot properly mount secrets, which are pretty fundamental to the operation of most pods.

Some solutions have been proposed for Docker 1.8. A patch has been proposed for Fedora/CentOS/RHEL systems for Docker 1.7 that would continue to use SHARED as the mount mode, but that will not work on other operating systems.

Paul, can you make sure there is a tracking issue in Kube for this (containerized Kubelet) that references this issue? We need to make sure we can drive this to closure.

Added an error to the readme with #3550

Good Man .. Thank you Clayton ☺

Regards,
Ian.

What we're going to do in the short term is get the patch that makes mount propagation 'slave' the default into the RH build of docker 1.7.

Longer-term, @rhatdan's team is going to try to get the right knobs for mount-propagation modes into upstream docker.

I still have issues with the latest images as well, w.r.t. "failed to get pid of docker-daemon". I wrote up a quick docker-compose.yml for this:

openshift:
    image: openshift/origin
    privileged: true
    net: host
    volumes:
        - /:/rootfs:ro
        - /var/run:/var/run:rw
        - /sys:/sys:ro
        - /var/lib/docker:/var/lib/docker:rw
        - /var/lib/openshift/openshift.local.volumes:/var/lib/openshift/openshift.local.volumes
    environment:
        - PORT=8443
        - VIRTUALHOST=openshift.vz1.bne.shortcircuit.net.au
    command: start

@prologic: You need docker 1.8 (not released yet) or a Red Hat-patched docker (like in Fedora 21).

Ahh I see; Thanks!

Docker 1.8 is out. Will it work out of the box, or does openshift need some code to specify mount propagation?

To answer myself: no, it won't work out of the box. Still waiting for these particular PRs to be merged:

Damn :/ I just upgraded to Docker 1.8.0 too :P

@mrunalp @pmorie is there an update to the PR?


@smarterclayton We are close to having an updated PR. It should be out by tomorrow.

@mrunalp : Any news regarding your PR? Thanks

Sweet. Will follow that.
Thanks!

@smarterclayton were we bumping the required docker version because of the exec hang in 1.6.2? if so, are we bumping to 1.7.x, and does this become higher priority because of that?

Mount propagation only ever worked in redhat builds.


Could you list the workaround steps for this configuration, please?

root@openshift:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.3 LTS
Release:    14.04
Codename:   trusty
root@openshift:~# uname -a
Linux openshift 3.19.0-28-generic #30~14.04.1-Ubuntu SMP Tue Sep 1 09:32:55 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@openshift:~# docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 05:37:18 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 05:37:18 UTC 2015
 OS/Arch:      linux/amd64
root@openshift:~# docker exec origin oc version
oc v1.0.6-622-g47d1103
kubernetes v1.1.0-alpha.1-653-g86b4e77

At this time on Ubuntu you'd need to run directly on the host - containerized OpenShift is blocked until the above-mentioned docker PRs are merged. Running directly on the host is the default install mode for something like the Ansible installer.
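
For example (a sketch; assumes you've downloaded and unpacked an origin release tarball on the host):

sudo ./openshift start --public-master=$(hostname -f)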