podman with systemd + healthcheck results in conmon error
leleobhz opened this issue · 7 comments
As reported at containers/podman#19426 and as requested by @mheon, refiling this as a Podman issue.
This error happens when a container run by systemd (via a unit generated by `podman generate systemd --new`) and configured with a healthcheck starts, leading to conmon errors when the healthcheck runs, as the log below and the original discussion show.
Info:
Excerpt of `journalctl --boot=0 | grep -E '(conmon|podman)'`:
ago 08 10:15:29 miriam systemd[1]: Started 009bd81fcec22595226dc9f04ef06d6aa551c3758501697c4bc3c05347ae739f.service - /usr/bin/podman healthcheck run 009bd81fcec22595226dc9f04ef06d6aa551c3758501697c4bc3c05347ae739f.
ago 08 10:15:29 miriam podman[50278]: 2023-08-08 10:15:29.410535446 -0300 -03 m=+0.101651773 container health_status 009bd81fcec22595226dc9f04ef06d6aa551c3758501697c4bc3c05347ae739f (image=quay.io/zenithtecnologia/zerotier-docker:dev, name=zerotier, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Container with the openssl binary, giving ability to work with cryptographic keys and certificates needed for web servers., org.zerotier.version=, vcs-ref=20b10499f1a156388a02509bfc295a5abf80b71c, architecture=x86_64, quay.expires-after=never, vendor=Red Hat, Inc., PODMAN_SYSTEMD_UNIT=zerotier-one-podman.service, io.buildah.version=1.23.1, build-date=2023-07-27T14:12:52, vcs-type=git, version=9.2, distribution-scope=public, summary=ZeroTier - a smart programmable Ethernet switch for planet Earth., com.redhat.component=openssl-container, maintainer=Zenith Tecnologia <dev@zenithtecnologia.com.br>, release=10, io.k8s.display-name=zerotier, name=zerotier, url=https://github.com/ZenithTecnologia/zerotier-docker, io.k8s.description=This container runs Zerotier - a smart programmable Ethernet switch for planet Earth.)
ago 08 10:15:29 miriam conmon[50297]: conmon 009bd81fcec22595226d <error>: Unable to send container stderr message to parent Broken pipe
ago 08 10:15:29 miriam podman[50278]: 2023-08-08 10:15:29.433546714 -0300 -03 m=+0.124663011 container exec_died 009bd81fcec22595226dc9f04ef06d6aa551c3758501697c4bc3c05347ae739f (image=quay.io/zenithtecnologia/zerotier-docker:dev, name=zerotier, description=Container with the openssl binary, giving ability to work with cryptographic keys and certificates needed for web servers., version=9.2, distribution-scope=public, com.redhat.component=openssl-container, io.k8s.display-name=zerotier, org.zerotier.version=, PODMAN_SYSTEMD_UNIT=zerotier-one-podman.service, maintainer=Zenith Tecnologia <dev@zenithtecnologia.com.br>, vcs-ref=20b10499f1a156388a02509bfc295a5abf80b71c, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.23.1, url=https://github.com/ZenithTecnologia/zerotier-docker, build-date=2023-07-27T14:12:52, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=zerotier, release=10, summary=ZeroTier - a smart programmable Ethernet switch for planet Earth., architecture=x86_64, io.k8s.description=This container runs Zerotier - a smart programmable Ethernet switch for planet Earth., quay.expires-after=never)
`podman info`:
host:
arch: amd64
buildahVersion: 1.31.0
cgroupControllers:
- cpuset
- cpu
- io
- memory
- hugetlb
- pids
- rdma
- misc
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.7-2.fc38.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.7, commit: '
cpuUtilization:
idlePercent: 73.78
systemPercent: 3.91
userPercent: 22.31
cpus: 16
databaseBackend: boltdb
distribution:
distribution: fedora
variant: workstation
version: "38"
eventLogger: journald
freeLocks: 2030
hostname: miriam
idMappings:
gidmap: null
uidmap: null
kernel: 6.4.7-200.fc38.x86_64
linkmode: dynamic
logDriver: journald
memFree: 13742452736
memTotal: 33310380032
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.7.0-1.fc38.x86_64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.7.0
package: netavark-1.7.0-1.fc38.x86_64
path: /usr/libexec/podman/netavark
version: netavark 1.7.0
ociRuntime:
name: crun
package: crun-1.8.6-1.fc38.x86_64
path: /usr/bin/crun
version: |-
crun version 1.8.6
commit: 73f759f4a39769f60990e7d225f561b4f4f06bcf
rundir: /run/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt-0^20230625.g32660ce-1.fc38.x86_64
version: |
pasta 0^20230625.g32660ce-1.fc38.x86_64
Copyright Red Hat
GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: true
path: /run/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-12.fc38.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.3
swapFree: 8589930496
swapTotal: 8589930496
uptime: 1h 7m 1.00s (Approximately 0.04 days)
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- docker.io
- quay.io
store:
configFile: /usr/share/containers/storage.conf
containerStore:
number: 2
paused: 0
running: 1
stopped: 1
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 118329704448
graphRootUsed: 70370422784
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "true"
imageCopyTmpDir: /var/tmp
imageStore:
number: 3
runRoot: /run/containers/storage
transientStore: false
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 4.6.0
Built: 1689942206
BuiltTime: Fri Jul 21 09:23:26 2023
GitCommit: ""
GoVersion: go1.20.6
Os: linux
OsArch: linux/amd64
Version: 4.6.0
`screenfetch -d 'distro;+cpu;+kernel;+gpu;+mem'`:
OS: Fedora
Kernel: x86_64 Linux 6.4.7-200.fc38.x86_64
CPU: Intel Core i7-7820X @ 16x 4.3GHz [31.0°C]
GPU: NVIDIA GeForce RTX 2060
RAM: 6917MiB / 31767MiB
Originally posted by @leleobhz in containers/podman#19426 (comment)
I think it is better to handle this in conmon and not fail when the sync pipe is broken: #439
That can happen because Podman exited earlier.
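The idea can be sketched as follows. This is an illustrative Python sketch only (conmon itself is written in C, and `send_sync_message` is a hypothetical name, not conmon's actual API): treat EPIPE on the sync pipe as "the parent already exited" and drop the message instead of failing.

```python
import errno
import os

def send_sync_message(fd: int, msg: bytes) -> bool:
    """Forward a message to the parent over the sync pipe.

    Returns False instead of raising when the parent has already
    closed its end of the pipe (EPIPE): a broken sync pipe just
    means Podman exited earlier, which is not fatal.
    """
    try:
        os.write(fd, msg)
        return True
    except OSError as e:
        if e.errno == errno.EPIPE:
            # Parent is gone; drop the message rather than erroring.
            return False
        raise

# Simulate Podman exiting before the healthcheck output is relayed:
# close the read end, then write to the orphaned write end.
# (CPython ignores SIGPIPE by default, so the write raises EPIPE;
# a C program like conmon must ignore SIGPIPE to get the same
# errno-based behaviour instead of being killed by the signal.)
r, w = os.pipe()
os.close(r)
print(send_sync_message(w, b"container stderr line"))  # False
```

The design choice is simply that a vanished parent is an expected condition during shutdown ordering, not an error worth logging at `<error>` level on every healthcheck run.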
Hello @flyingfishflash,
Your comment apparently got a fix. It also affected me, so I turned it into an issue as requested in your question.
Hey thank you for transferring to issues from discussion, and the subsequent fix!
containers/podman#19426 (comment)
I've posted in the discussion above, but there was no reply.
My journal gets filled with `<error>: Unable to send container stderr message to parent Broken pipe`. The containers are all healthy...
That seems like a separate issue; please file a fresh bug report.
containers/podman#19426 (comment)
I've posted in the discussion above, but there was no reply.
My journal gets filled with `<error>: Unable to send container stderr message to parent Broken pipe`. The containers are all healthy...
I'm having the same issue. Please link to the bug report you open on it from here so people can find it when googling.
Here you go: #454