shims `0.9.2` doesn't work on cgroup v1 ubuntu
Mossaka opened this issue · 0 comments
Reproduce
Run the following commands on a cgroup v1 Ubuntu 20.04.6 LTS host:

```shell
sudo k3d cluster create wasm-cluster --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.9.2 -p "8081:80@loadbalancer" --agents 2
kubectl apply -f https://github.com/deislabs/containerd-wasm-shims/raw/main/deployments/workloads/runtime.yaml
kubectl apply -f https://github.com/deislabs/containerd-wasm-shims/raw/main/deployments/workloads/workload.yaml
```
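Before reproducing, it may help to confirm which cgroup version the host is actually running. A minimal check, assuming the standard `/sys/fs/cgroup` mount point:

```shell
# Print the filesystem type of the cgroup mount point:
# "cgroup2fs" indicates cgroup v2 (unified hierarchy),
# "tmpfs" indicates cgroup v1 (per-controller hierarchies).
stat -fc %T /sys/fs/cgroup/
```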
Kubernetes logs

```text
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  5m25s                  default-scheduler  Successfully assigned default/wasm-spin-8649cf7566-jssf2 to k3d-wasm-cluster-agent-1
Normal   Pulling    5m24s                  kubelet            Pulling image "ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.9.2"
Normal   Pulled     5m23s                  kubelet            Successfully pulled image "ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.9.2" in 1.553587008s (1.553596909s including waiting)
Normal   Pulled     3m41s (x4 over 5m20s)  kubelet            Container image "ghcr.io/deislabs/containerd-wasm-shims/examples/spin-rust-hello:v0.9.2" already present on machine
Normal   Created    3m41s (x5 over 5m23s)  kubelet            Created container spin-hello
Warning  Failed     3m41s (x5 over 5m21s)  kubelet            Error: failed to create containerd task: failed to create shim task: Others("failed to receive. \"waiting for init ready\". BrokenChannel"): unknown
Warning  BackOff    17s (x25 over 5m19s)   kubelet            Back-off restarting failed container spin-hello in pod wasm-spin-8649cf7566-jssf2_default(567f5b2a-35ae-46ed-87c4-76b74b371238)
```
Containerd logs

```text
time="2023-11-01T00:30:17.44428603Z" level=info msg="found manifest with WASM OCI image format."
time="2023-11-01T00:30:17.445159344Z" level=info msg="cgroup manager V1 will be used"
time="2023-11-01T00:30:17.45954827Z" level=error msg="failed to canonicalize \"/sys/fs/cgroup/systemd/docker/8a32364d3653534991bb0b9dee5564982a08dae797ad10900d458942a9038b5e\": No such file or directory (os error 2)"
time="2023-11-01T00:30:17.460004177Z" level=error msg="failed to mount Mount { destination: \"/sys/fs/cgroup/systemd\", typ: Some(\"bind\"), source: Some(\"/sys/fs/cgroup/systemd/docker/8a32364d3653534991bb0b9dee5564982a08dae797ad10900d458942a9038b5e\"), options: Some([\"rw\", \"rbind\"]) }: io error"
time="2023-11-01T00:30:17.460963692Z" level=error msg="failed to mount systemd cgroup hierarchy: io error"
time="2023-11-01T00:30:17.461164595Z" level=error msg="failed to mount cgroup v2: io error"
time="2023-11-01T00:30:17.461311097Z" level=error msg="failed to prepare rootfs err=Mount(Io(Os { code: 2, kind: NotFound, message: \"No such file or directory\" }))"
time="2023-11-01T00:30:17.461929807Z" level=error msg="failed to initialize container process: failed to prepare rootfs"
time="2023-11-01T00:30:17.462512216Z" level=error msg="failed to wait for init ready: failed to receive. \"waiting for init ready\". BrokenChannel"
time="2023-11-01T00:30:17.462560817Z" level=error msg="failed to run container process err=Channel(ReceiveError { msg: \"waiting for init ready\", source: BrokenChannel })"
```
This might be an issue upstream in runwasi.
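The errors above stem from the shim failing to canonicalize a path under `/sys/fs/cgroup/systemd`, the cgroup v1 "named systemd" hierarchy. As a quick diagnostic (a sketch, assuming the standard mount layout; ideally run inside the k3d node container where the shim executes), one can check whether that hierarchy is present at all:

```shell
# Check for the cgroup v1 named-systemd hierarchy that the shim
# attempts to bind-mount into the container's rootfs.
if [ -d /sys/fs/cgroup/systemd ]; then
  echo "systemd cgroup v1 hierarchy is mounted"
else
  echo "systemd cgroup v1 hierarchy is absent"
fi
```

If the hierarchy exists on the host but not under the `docker/<container-id>` subpath the shim computes, that would point at the path-resolution logic rather than the host configuration.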