Running boilerplate:image-v1.0.0 with --userns keep-id for the first time takes 70 min and increases image size by 4 GB
T0MASD opened this issue · 6 comments
When starting the boilerplate container for the first time on a new f34 VM with --userns keep-id
passed, Podman takes 70 minutes to run and the boilerplate image grows by 4 GB. When --userns keep-id
is not used, the container starts within seconds. Is this expected behaviour?
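Condensed, the reproduction boils down to the following (a sketch of the commands used in the logs below; `--rm` stands in for the separate `podman rm` step, the volume mount is omitted for brevity, and the whole thing is guarded so it is a no-op on hosts without podman):

```shell
# Minimal reproduction sketch: the same image, run once without and once
# with --userns keep-id. The second command is the slow one on first use.
IMAGE=quay.io/app-sre/boilerplate:image-v1.0.0
if command -v podman >/dev/null 2>&1; then
  podman pull "$IMAGE"
  time podman run --rm "$IMAGE" echo done                   # completes in ~1 s
  time podman run --rm --userns keep-id "$IMAGE" echo done  # first run takes ~70 min
fi
```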
Here's the debug log:
podman versions
Begin time : Fri 25 Jun 2021 09:35:10 AM UTC
Begin rpmdb : 956:4b115748d13b47de1955c90c04da11e6841e983c
End time : Fri 25 Jun 2021 09:36:40 AM UTC (90 seconds)
End rpmdb : 976:e5c3e45bfe5a3b6ea17db447438af273c3c5e541
User : <tomas>
Return-Code : Success
Releasever : 34
Command Line :
Comment :
Packages Altered:
Install buildah-1.21.0-1.fc34.x86_64 @os
Install catatonit-0.1.5-4.fc34.x86_64 @os
Install conmon-2:2.0.27-2.fc34.x86_64 @os
Install container-selinux-2:2.163.0-1.fc34.noarch @os
Install containernetworking-plugins-1.0.0-0.2.rc1.fc34.x86_64 @os
Install containers-common-4:1-19.fc34.noarch @os
Install criu-3.15-3.fc34.x86_64 @os
Install criu-libs-3.15-3.fc34.x86_64 @os
Install crun-0.20.1-1.fc34.x86_64 @os
Install dnsmasq-2.85-1.fc34.x86_64 @os
Install fuse-overlayfs-1.5.0-1.fc34.x86_64 @os
Install fuse3-3.10.4-1.fc34.x86_64 @os
Install libbsd-0.10.0-7.fc34.x86_64 @os
Install libnet-1.2-2.fc34.x86_64 @os
Install libslirp-4.4.0-2.fc34.x86_64 @os
Install podman-3:3.2.1-1.fc34.x86_64 @os
Install podman-compose-0.1.7-4.git20210129.fc34.noarch @os
Install podman-plugins-3:3.2.1-1.fc34.x86_64 @os
Install slirp4netns-1.1.9-1.fc34.x86_64 @os
Install yajl-2.1.0-16.fc34.x86_64 @os
Pull image
[tomas@dev-vm managed-upgrade-operator]$ podman pull quay.io/app-sre/boilerplate:image-v1.0.0
Trying to pull quay.io/app-sre/boilerplate:image-v1.0.0...
Getting image source signatures
Copying blob 875a3c098773 done
Copying blob 7cf645468759 done
Copying blob d5e1781397c5 done
Copying blob 041d59463982 done
Copying config 4070d12a8d done
Writing manifest to image destination
Storing signatures
4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
Show images
[tomas@dev-vm managed-upgrade-operator]$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/app-sre/boilerplate image-v1.0.0 4070d12a8d3f 5 weeks ago 2.65 GB
Run image without --userns keep-id
[tomas@dev-vm managed-upgrade-operator]$ time /usr/bin/podman run -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done
d749b687b2aee1d17ef2c646e2aca7275b3e17ee683869eab356a6a98bcb51a8
real 0m1.393s
user 0m0.161s
sys 0m0.122s
[tomas@dev-vm managed-upgrade-operator]$ podman rm $(podman ps -aq)
d749b687b2aee1d17ef2c646e2aca7275b3e17ee683869eab356a6a98bcb51a8
Set log level to debug and run container with --userns keep-id
[tomas@dev-vm managed-upgrade-operator]$ time /usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done
INFO[0000] /usr/bin/podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(/usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done)
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/tomas/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/tomas/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/tomas/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/tomas/.local/share/containers/storage/volumes
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: opaque flag erroneously copied up, consider update to kernel 4.8 or later to fix
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] User mount /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator options [Z]
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage ([overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849)
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage ([overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849)
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage ([overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849)
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Allocated lock 0 for container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] overlay: mount_data=,lowerdir=/home/tomas/.local/share/containers/storage/overlay/l/MNZJPZ3W2AGTDD2HC4525YSUUI:/home/tomas/.local/share/containers/storage/overlay/l/CNCVADD6DSMOLBVFSBWNPUEHOM:/home/tomas/.local/share/containers/storage/overlay/l/BVQ3WNLTQXTPUP3BOQ4WKUP4YK:/home/tomas/.local/share/containers/storage/overlay/l/PP7FHT63BMSMKCXOKA4BBW6ACH,upperdir=/home/tomas/.local/share/containers/storage/overlay/47e71c685a09eb82f2404234565e049e2521fd7c024f3ab2d610287ab096343d/diff,workdir=/home/tomas/.local/share/containers/storage/overlay/47e71c685a09eb82f2404234565e049e2521fd7c024f3ab2d610287ab096343d/work,userxattr
... waiting here for a long time ...
DEBU[4506] created container "340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4"
DEBU[4506] container "340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4" has work directory "/home/tomas/.local/share/containers/storage/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata"
DEBU[4506] container "340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4" has run directory "/run/user/1000/containers/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata"
DEBU[4506] [graphdriver] trying provided driver "overlay"
DEBU[4506] cached value indicated that overlay is supported
DEBU[4506] cached value indicated that metacopy is not being used
DEBU[4506] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[4506] overlay: mount_data=,lowerdir=/home/tomas/.local/share/containers/storage/overlay/l/MJBDUS3AHCLSVJ3VOZNHMMZ6K2:/home/tomas/.local/share/containers/storage/overlay/l/MJBDUS3AHCLSVJ3VOZNHMMZ6K2/../diff1:/home/tomas/.local/share/containers/storage/overlay/l/MNZJPZ3W2AGTDD2HC4525YSUUI:/home/tomas/.local/share/containers/storage/overlay/l/CNCVADD6DSMOLBVFSBWNPUEHOM:/home/tomas/.local/share/containers/storage/overlay/l/BVQ3WNLTQXTPUP3BOQ4WKUP4YK:/home/tomas/.local/share/containers/storage/overlay/l/PP7FHT63BMSMKCXOKA4BBW6ACH,upperdir=/home/tomas/.local/share/containers/storage/overlay/f2277113e202de472896022e192feeeb5c2bfb2c02764f89a9e463121483895d/diff,workdir=/home/tomas/.local/share/containers/storage/overlay/f2277113e202de472896022e192feeeb5c2bfb2c02764f89a9e463121483895d/work,userxattr,context="system_u:object_r:container_file_t:s0:c229,c299"
DEBU[4506] mounted container "340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4" at "/home/tomas/.local/share/containers/storage/overlay/f2277113e202de472896022e192feeeb5c2bfb2c02764f89a9e463121483895d/merged"
DEBU[4506] Created root filesystem for container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 at /storage/ram/containers/storage/overlay/f2277113e202de472896022e192feeeb5c2bfb2c02764f89a9e463121483895d/merged
DEBU[4506] Workdir "/go/src/github.com/openshift/origin" resolved to host path "/storage/ram/containers/storage/overlay/f2277113e202de472896022e192feeeb5c2bfb2c02764f89a9e463121483895d/merged/go/src/github.com/openshift/origin"
DEBU[4506] Modifying container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 /etc/passwd
DEBU[4506] Modifying container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 /etc/group
DEBU[4506] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[4506] Setting CGroups for container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 to user.slice:libpod:340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4
DEBU[4506] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[4506] Created OCI spec for container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 at /home/tomas/.local/share/containers/storage/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata/config.json
DEBU[4506] /usr/bin/conmon messages will be logged to syslog
DEBU[4506] running conmon: /usr/bin/conmon args="[--api-version 1 -c 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 -u 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 -r /usr/bin/crun -b /home/tomas/.local/share/containers/storage/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata -p /run/user/1000/containers/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata/pidfile -n cool_franklin --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/home/tomas/.local/share/containers/storage/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/tomas/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4]"
INFO[4506] Running conmon under slice user.slice and unitName libpod-conmon-340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[4506] Received: 41746
INFO[4506] Got Conmon PID as 41742
DEBU[4506] Created container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 in OCI runtime
DEBU[4506] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 41746 tap0
DEBU[4506] Starting container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4 with command [echo done]
DEBU[4506] Started container 340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4
340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4
DEBU[4507] Called run.PersistentPostRunE(/usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done)
real 75m7.246s
user 0m7.661s
sys 0m49.477s
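One detail worth noting in the debug log above: the overlay mount created during the keep-id run carries two extra lowerdir entries (including a `.../diff1` path) compared to the four-layer mount of the image itself, which is consistent with Podman materializing a UID-remapped copy of the image layers. A small sketch that counts the entries (layer link names copied from the log, the identical storage-root prefixes trimmed):

```shell
# lowerdir lists from the debug log, reduced to the layer link names.
PLAIN="MNZJPZ3W2AGTDD2HC4525YSUUI:CNCVADD6DSMOLBVFSBWNPUEHOM:BVQ3WNLTQXTPUP3BOQ4WKUP4YK:PP7FHT63BMSMKCXOKA4BBW6ACH"
KEEPID="MJBDUS3AHCLSVJ3VOZNHMMZ6K2:MJBDUS3AHCLSVJ3VOZNHMMZ6K2/../diff1:$PLAIN"

# Count entries: 4 without keep-id, 6 with it. The two extra entries point
# at the freshly created remapped layer stacked on top of the image layers.
echo "$PLAIN"  | tr ':' '\n' | wc -l
echo "$KEEPID" | tr ':' '\n' | wc -l
```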
Remove the container and look at images
Note the size increased by 4 GB
[tomas@dev-vm managed-upgrade-operator]$ podman rm $(podman ps -aq)
340bad722e32ec5f438351269afea435deb1f6087d7b2b3ec030e656392c88f4
[tomas@dev-vm managed-upgrade-operator]$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/app-sre/boilerplate image-v1.0.0 4070d12a8d3f 5 weeks ago 6.22 GB
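A quick back-of-the-envelope on the figures above (an illustration, not a diagnosis): dividing the extra storage by the wall-clock time of the slow run implies a very low effective write rate, as one would expect if every file in the image were being copied and chowned one at a time:

```shell
# 2.65 GB before, 6.22 GB after (podman images); real 75m7s for the keep-id run.
DELTA_GB=$(awk 'BEGIN { printf "%.2f", 6.22 - 2.65 }')
RATE_MBS=$(awk 'BEGIN { printf "%.2f", (6.22 - 2.65) * 1000 / (75 * 60 + 7) }')
echo "extra storage: ${DELTA_GB} GB, effective rate: ${RATE_MBS} MB/s"
```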
Run container second time with --userns keep-id
[tomas@dev-vm managed-upgrade-operator]$ time /usr/bin/podman run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done
1ff928988513b7d884a8360249b93495d3e93309623faf0175e223da1bc8617b
real 0m0.786s
user 0m0.159s
sys 0m0.114s
[tomas@dev-vm managed-upgrade-operator]$ podman rm $(podman ps -aq)
1ff928988513b7d884a8360249b93495d3e93309623faf0175e223da1bc8617b
What fuse/overlay packages do you have installed?
This is mine; I have not had this issue in the past.
fuse-libs-2.9.9-11.fc34.x86_64
fuse-2.9.9-11.fc34.x86_64
fuse-overlayfs-1.5.0-1.fc34.x86_64
zfs-fuse-0.7.2.2-18.fc34.x86_64
gvfs-fuse-1.48.1-1.fc34.x86_64
glusterfs-fuse-9.2-1.fc34.x86_64
fuse3-libs-3.10.4-1.fc34.x86_64
fuse-common-3.10.4-1.fc34.x86_64
fuse3-3.10.4-1.fc34.x86_64
fuse-sshfs-3.7.2-1.fc34.x86_64
Version: 3.2.1
API Version: 3.2.1
Go Version: go1.16.3
Built: Tue Jun 15 05:12:29 2021
OS/Arch: linux/amd64
Thanks @dofinn, here's what I had:
fuse-libs-2.9.9-11.fc34.x86_64
fuse-2.9.9-11.fc34.x86_64
fuse-overlayfs-1.5.0-1.fc34.x86_64
package zfs-fuse is not installed
package gvfs-fuse is not installed
package glusterfs-fuse is not installed
fuse3-libs-3.10.4-1.fc34.x86_64
fuse-common-3.10.4-1.fc34.x86_64
fuse3-3.10.4-1.fc34.x86_64
package fuse-sshfs is not installed
So I installed the missing packages; now I've got:
fuse-libs-2.9.9-11.fc34.x86_64
fuse-2.9.9-11.fc34.x86_64
fuse-overlayfs-1.5.0-1.fc34.x86_64
zfs-fuse-0.7.2.2-18.fc34.x86_64
gvfs-fuse-1.48.1-1.fc34.x86_64
glusterfs-fuse-9.2-1.fc34.x86_64
fuse3-libs-3.10.4-1.fc34.x86_64
fuse-common-3.10.4-1.fc34.x86_64
fuse3-3.10.4-1.fc34.x86_64
fuse-sshfs-3.7.2-1.fc34.x86_64
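For reference, the package lists above look like plain `rpm -q` output, which prints "package X is not installed" for anything missing, matching the earlier listing. A hedged sketch of that query (the package set is taken from the lists above; guarded so it is a no-op on non-RPM hosts):

```shell
# Query the fuse/overlay-related packages discussed in this thread.
PKGS="fuse-libs fuse fuse-overlayfs zfs-fuse gvfs-fuse glusterfs-fuse fuse3-libs fuse-common fuse3 fuse-sshfs"
if command -v rpm >/dev/null 2>&1; then
  rpm -q $PKGS || true   # nonzero exit just means some packages are missing
fi
```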
I restarted the VM (just in case) and ran podman rmi $(podman images -aq) -f
and sudo rm -rf ~/.local/share/containers/*
After the VM came back, I re-ran
podman pull quay.io/app-sre/boilerplate:image-v1.0.0
followed by
[tomas@dev managed-upgrade-operator]$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/app-sre/boilerplate image-v1.0.0 4070d12a8d3f 5 weeks ago 2.65 GB
[tomas@dev managed-upgrade-operator]$ time /usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done
INFO[0000] /usr/bin/podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(/usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done)
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/tomas/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/tomas/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/tomas/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/tomas/.local/share/containers/storage/volumes
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: opaque flag erroneously copied up, consider update to kernel 4.8 or later to fix
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend journald
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
DEBU[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] User mount /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator options [Z]
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage ([overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849)
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Looking up image "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] Trying "quay.io/app-sre/boilerplate:image-v1.0.0" ...
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Found image "quay.io/app-sre/boilerplate:image-v1.0.0" as "quay.io/app-sre/boilerplate:image-v1.0.0" in local containers storage ([overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849)
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] Inspecting image 4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849
DEBU[0000] using systemd mode: false
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Allocated lock 0 for container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba
DEBU[0000] parsed reference into "[overlay@/home/tomas/.local/share/containers/storage+/run/user/1000/containers]@4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] exporting opaque data as blob "sha256:4070d12a8d3f1b7015c9ce7aa2bbfa191c3b5b4045c8ed559544be84779b4849"
DEBU[0000] overlay: mount_data=,lowerdir=/home/tomas/.local/share/containers/storage/overlay/l/C6CQBNDGIKZXHYOQPORFPL4DYB:/home/tomas/.local/share/containers/storage/overlay/l/M27HG6DD4RVBUBFPMU77KTSJVZ:/home/tomas/.local/share/containers/storage/overlay/l/NHBESBSTMJFUXM3HL7M625T32N:/home/tomas/.local/share/containers/storage/overlay/l/HF5UBIX7ZHSB6BD4DH6EGI4AFU,upperdir=/home/tomas/.local/share/containers/storage/overlay/57ccdc8303411edac2f086dc14983114ccf879b6eb219f7819304b508be3ff5b/diff,workdir=/home/tomas/.local/share/containers/storage/overlay/57ccdc8303411edac2f086dc14983114ccf879b6eb219f7819304b508be3ff5b/work,userxattr
... taking a long time here ...
DEBU[2675] created container "4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba"
DEBU[2675] container "4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba" has work directory "/home/tomas/.local/share/containers/storage/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata"
DEBU[2675] container "4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba" has run directory "/run/user/1000/containers/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata"
DEBU[2676] [graphdriver] trying provided driver "overlay"
DEBU[2676] cached value indicated that overlay is supported
DEBU[2676] cached value indicated that metacopy is not being used
DEBU[2676] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[2676] overlay: mount_data=,lowerdir=/home/tomas/.local/share/containers/storage/overlay/l/4I2Z3QCHJDJNVJOIUJ6RISXXFL:/home/tomas/.local/share/containers/storage/overlay/l/4I2Z3QCHJDJNVJOIUJ6RISXXFL/../diff1:/home/tomas/.local/share/containers/storage/overlay/l/C6CQBNDGIKZXHYOQPORFPL4DYB:/home/tomas/.local/share/containers/storage/overlay/l/M27HG6DD4RVBUBFPMU77KTSJVZ:/home/tomas/.local/share/containers/storage/overlay/l/NHBESBSTMJFUXM3HL7M625T32N:/home/tomas/.local/share/containers/storage/overlay/l/HF5UBIX7ZHSB6BD4DH6EGI4AFU,upperdir=/home/tomas/.local/share/containers/storage/overlay/755686e8453a1d9f7d762ba648517a16e1c72864a0450269e89d84aea67960f9/diff,workdir=/home/tomas/.local/share/containers/storage/overlay/755686e8453a1d9f7d762ba648517a16e1c72864a0450269e89d84aea67960f9/work,userxattr,context="system_u:object_r:container_file_t:s0:c531,c548"
DEBU[2676] mounted container "4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba" at "/home/tomas/.local/share/containers/storage/overlay/755686e8453a1d9f7d762ba648517a16e1c72864a0450269e89d84aea67960f9/merged"
DEBU[2676] Created root filesystem for container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba at /storage/ram/containers/storage/overlay/755686e8453a1d9f7d762ba648517a16e1c72864a0450269e89d84aea67960f9/merged
DEBU[2676] Workdir "/go/src/github.com/openshift/origin" resolved to host path "/storage/ram/containers/storage/overlay/755686e8453a1d9f7d762ba648517a16e1c72864a0450269e89d84aea67960f9/merged/go/src/github.com/openshift/origin"
DEBU[2676] Modifying container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba /etc/passwd
DEBU[2676] Modifying container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba /etc/group
DEBU[2676] /etc/system-fips does not exist on host, not mounting FIPS mode subscription
DEBU[2676] Setting CGroups for container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba to user.slice:libpod:4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba
DEBU[2676] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[2676] Created OCI spec for container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba at /home/tomas/.local/share/containers/storage/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata/config.json
DEBU[2676] /usr/bin/conmon messages will be logged to syslog
DEBU[2676] running conmon: /usr/bin/conmon args="[--api-version 1 -c 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba -u 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba -r /usr/bin/crun -b /home/tomas/.local/share/containers/storage/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata -p /run/user/1000/containers/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata/pidfile -n objective_merkle --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/home/tomas/.local/share/containers/storage/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/tomas/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba]"
INFO[2676] Running conmon under slice user.slice and unitName libpod-conmon-4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
DEBU[2676] Received: 1623
INFO[2676] Got Conmon PID as 1619
DEBU[2676] Created container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba in OCI runtime
DEBU[2676] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 1623 tap0
DEBU[2676] Starting container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba with command [echo done]
DEBU[2676] Started container 4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba
4485d1238ecb3ed9fe648ae1cfb5ba5c186840aa4c7708a40e4450e6d3d914ba
DEBU[2676] Called run.PersistentPostRunE(/usr/bin/podman --log-level debug run --userns keep-id -d -v /storage/ram/tomas/Development/src/github/openshift/managed-upgrade-operator:/go/src/github.com/openshift/managed-upgrade-operator:Z quay.io/app-sre/boilerplate:image-v1.0.0 echo done)
real 44m36.751s
user 0m6.430s
sys 0m47.767s
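The timings above point at I/O rather than CPU: user+sys is under a minute, while real time is 44m36s — which also matches the `DEBU[2676]` timestamps (2676 s). A quick back-of-envelope check, assuming the full +4 GiB of duplicated layer data was written during that window:

```shell
# Rough write throughput implied by the reported numbers (+4 GiB in 44m36s)
secs=$((44 * 60 + 36))   # 2676 s, matching the DEBU[2676] timestamps
mib=$((4 * 1024))        # ~4096 MiB of duplicated layer data (approximate)
awk -v m="$mib" -v s="$secs" 'BEGIN { printf "%.2f MiB/s\n", m / s }'
# prints: 1.53 MiB/s
```

That effective ~1.5 MiB/s is far below what the storage should sustain for a bulk copy, suggesting the time is dominated by per-file copy-up and chown overhead rather than raw data transfer.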
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.