ssh plugin not working with k3d - MountVolume.SetUp failed for volume "docker" : hostPath type check failed: /var/run/docker.sock is not a file
I'm trying to exec into a pod as root in my k3d cluster, but after a while I always get an "ssh-pod-24136 error: timed out waiting for the condition" error message.
I can see that the ssh plugin tries to spin up an ssh-pod, but immediately after starting, the pod fails with the following error message:
MountVolume.SetUp failed for volume "docker" : hostPath type check failed: /var/run/docker.sock is not a file
Am I doing something wrong?
Sorry, @mamiu, I haven't the slightest idea how k3d works. Maybe someone who uses it will see this and can offer some insight?
No problem @jordanwilson230. But it's quite simple.
k3d does nothing more than wrap multiple k3s instances in Docker containers (k3s is basically the same as Kubernetes, but lightweight). So you can set up a Kubernetes cluster in minutes (mainly for testing and local development purposes). You should definitely try it out. 😉
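For anyone who wants to try it: a local multi-node cluster can be created with a single command (a minimal sketch, assuming k3d v4 or later; the cluster name demo and the node counts are just examples):

k3d cluster create demo --servers 1 --agents 2
kubectl cluster-info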
I figured out how to log in as root. Below are the steps necessary to integrate k3d support into your script.
Let's assume we have a pod called nginx running in the namespace nginx-test.
kubectl create namespace nginx-test
kubectl run nginx --image=nginx -n nginx-test
1. Check if the current cluster is a k3d cluster
If the following command outputs k3d, it's a k3d cluster:
kubectl get node --selector "node-role.kubernetes.io/master=true" -o name | sed 's/.*\///' | cut -c -3
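In a script, that check could be wrapped in a simple conditional, for example (just a sketch; the variable name is mine):

node_prefix=$(kubectl get node --selector "node-role.kubernetes.io/master=true" -o name | sed 's/.*\///' | cut -c -3)
if [ "$node_prefix" = "k3d" ]; then
  echo "k3d cluster detected"
fi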
2. Get the node on which the pod is running
kubectl get pod nginx -n nginx-test -o jsonpath="{.spec.nodeName}"
On my demo cluster it's k3d-demo-server-0.
3. Get the container ID of the pod
(This command only applies if there's a single container in the pod. If there are multiple containers in the pod, that case has to be handled separately; see the sketch after this step.)
kubectl get pod nginx -n nginx-test -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///'
In my test the output was 6d100587c71c60facd6d6ef4e18bd4e085b29453d1866bfc736a9035d9848820.
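For the multi-container case, one possible approach is to filter by container name in the jsonpath expression (the container name nginx here is just an example):

kubectl get pod nginx -n nginx-test -o jsonpath='{.status.containerStatuses[?(@.name=="nginx")].containerID}' | sed 's/.*\/\///'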
4. Exec into the k3d node (which is a docker container) where the pod is running
The name of the container is the output of step 2 (which is k3d-demo-server-0 for me).
docker exec -it k3d-demo-server-0 sh
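Once inside the node, you can check that the pod's container is visible to the container runtime, e.g. with crictl (which is bundled with k3s), grepping for the ID from step 3:

k3s crictl ps | grep 6d100587c71c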
5. Exec into the pod container
NOTE: Since the k3s crictl exec command has no option to specify the login user, we have to use the runc tool instead.
The runc command is the "CLI tool for spawning and running containers according to the OCI specification".
The --user (or -u) option needs the UID of the user you want to log in with (0 in the case of root). From the docs: --user value, -u value: UID (format: <uid>[:<gid>])
We also have to specify the root path of the containers, which is /run/containerd/runc/k8s.io/.
So, in order to log into the pod as root, we have to execute the following command:
runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 6d100587c71c60facd6d6ef4e18bd4e085b29453d1866bfc736a9035d9848820 sh
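Putting the steps together, here is a rough sketch of how this could be automated for a single-container pod (the variable names are mine, and the pod/namespace values come from the example above):

POD=nginx
NAMESPACE=nginx-test

# Step 1: only proceed if this is a k3d cluster
if [ "$(kubectl get node --selector "node-role.kubernetes.io/master=true" -o name | sed 's/.*\///' | cut -c -3)" != "k3d" ]; then
  echo "Not a k3d cluster" >&2
  exit 1
fi

# Step 2: the node (= docker container) the pod is running on
NODE=$(kubectl get pod "$POD" -n "$NAMESPACE" -o jsonpath="{.spec.nodeName}")

# Step 3: the container ID of the (single) container in the pod
CONTAINER_ID=$(kubectl get pod "$POD" -n "$NAMESPACE" -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///')

# Steps 4 + 5: exec into the node and from there into the container as root (UID 0)
docker exec -it "$NODE" runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 "$CONTAINER_ID" sh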
Very nice @mamiu! Glad you figured it out :D Also, thanks for updating the issue with that solution!