Better handle exhaustion of inotify instances
xavierog opened this issue · 1 comment
xavierog commented
Description
buildah segfaults when it cannot allocate an inotify instance, i.e. when the per-user limit on inotify instances is exhausted.
Note: after inspecting the stack trace, I guessed the underlying issue, closed a few applications, launched podman build again, and this time it ran flawlessly.
Steps to reproduce the issue:
- Get a Linux system where /proc/sys/fs/inotify/max_user_instances is a little low (e.g. on Debian Sid, it seems to be 128) compared to your actual usage (which can be inspected using inotify-info); the sketch after this list shows one way to use up the remaining instances on purpose.
- Run a buildah/podman build command. You should get the stack trace below.
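Here is a minimal Go sketch, assuming the github.com/fsnotify/fsnotify module is available, of one way to deliberately consume the remaining inotify instances for the current user so that the precondition above is easy to reproduce:

```go
// exhaust_inotify.go: keep creating fsnotify watchers (one inotify instance
// each) until creation fails, then hold them open so another process such as
// a buildah/podman build hits the per-user limit as well.
package main

import (
	"fmt"

	"github.com/fsnotify/fsnotify"
)

func main() {
	var watchers []*fsnotify.Watcher
	for {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			// Typically a wrapped EMFILE ("too many open files") once
			// fs.inotify.max_user_instances has been reached.
			fmt.Printf("watcher #%d could not be created: %v\n", len(watchers)+1, err)
			break
		}
		watchers = append(watchers, w)
	}
	fmt.Println("holding inotify instances open; press Enter to release them")
	fmt.Scanln()
	for _, w := range watchers {
		w.Close()
	}
}
```

Run it in one terminal and, while it is waiting, run the buildah/podman build command in another terminal.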
Describe the results you received:
STEP 1/10: FROM docker.io/library/debian:stable-20240812-slim
STEP 2/10: COPY README.md pyproject.toml setup.py /tmp/moulti
STEP 3/10: COPY examples /tmp/moulti/examples
STEP 4/10: COPY src/moulti /tmp/moulti/src/moulti
STEP 5/10: ENV PIPX_HOME=/opt/pipx PIPX_BIN_DIR=/usr/local/bin PIPX_MAN_DIR=/usr/local/share/man
STEP 6/10: RUN unlink /etc/apt/apt.conf.d/docker-clean && apt update && apt install --no-install-recommends -y pipx xclip && pipx install /tmp/moulti && mkdir /export && rm -rf /tmp/moulti /root/.cache && rm -rf /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin /var/lib/apt/lists/deb*
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x114573a]
goroutine 63 [running]:
github.com/fsnotify/fsnotify.(*Watcher).isClosed(...)
github.com/fsnotify/fsnotify/backend_inotify.go:296
github.com/fsnotify/fsnotify.(*Watcher).AddWith(0x0, {0x1bdb4bf, 0x8}, {0x0, 0x0, 0xc0003e7320?})
github.com/fsnotify/fsnotify/backend_inotify.go:372 +0x3a
github.com/fsnotify/fsnotify.(*Watcher).Add(...)
github.com/fsnotify/fsnotify/backend_inotify.go:362
tags.cncf.io/container-device-interface/pkg/cdi.(*watch).update(0xc00039e050, 0xc0004ec0c0, {0x0, 0x0, 0xc0003e7350?})
tags.cncf.io/container-device-interface/pkg/cdi/cache.go:572 +0xd9
tags.cncf.io/container-device-interface/pkg/cdi.(*Cache).refreshIfRequired(0xc0003da140, 0x0?)
tags.cncf.io/container-device-interface/pkg/cdi/cache.go:217 +0x38
tags.cncf.io/container-device-interface/pkg/cdi.(*Cache).Refresh(0xc0003da140)
tags.cncf.io/container-device-interface/pkg/cdi/cache.go:130 +0xa6
github.com/containers/buildah.(*Builder).cdiSetupDevicesInSpec(0xc000424008, {0x2cefe00, 0x0, 0x0}, {0x0, 0x0}, 0xc000148750)
github.com/containers/buildah/run_linux.go:95 +0x21b
github.com/containers/buildah.(*Builder).Run(_, {_, _, _}, {0xc000119880, {0x0, 0x0}, 0x0, {0x1bd46ac, 0x4}, ...})
github.com/containers/buildah/run_linux.go:226 +0xb13
github.com/containers/buildah/imagebuildah.(*StageExecutor).Run(_, {0x1, {0xc00070c320, 0x1, 0x1}, {0x0, 0x0, 0x0}, {0x0, 0x0}, ...}, ...)
github.com/containers/buildah/imagebuildah/stage_executor.go:880 +0x1330
github.com/openshift/imagebuilder.(*Builder).Run(0xc000139b08, 0xc0004e81e0, {0x1f2f428, 0xc0005e3d10}, 0x1)
github.com/openshift/imagebuilder/builder.go:537 +0x507
github.com/containers/buildah/imagebuildah.(*StageExecutor).Execute(0xc0005e3d10, {0x1f28e08, 0xc00037f2f0}, {0xc0003b3e30, 0x2d})
github.com/containers/buildah/imagebuildah/stage_executor.go:1410 +0x1439
github.com/containers/buildah/imagebuildah.(*Executor).buildStage(0xc000203808, {0x1f28e08, 0xc00037f2f0}, 0xc0004802d0, {0xc0004802a0, 0x1, 0x1}, 0x0)
github.com/containers/buildah/imagebuildah/executor.go:583 +0x592
github.com/containers/buildah/imagebuildah.(*Executor).Build.func3.1()
github.com/containers/buildah/imagebuildah/executor.go:950 +0x2b1
created by github.com/containers/buildah/imagebuildah.(*Executor).Build.func3 in goroutine 61
github.com/containers/buildah/imagebuildah/executor.go:921 +0x2db
Describe the results you expected:
An error message like "unable to allocate inotify instance", instead of a segfault, would be nice.
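For context: in the trace above, fsnotify.(*Watcher).AddWith is called with a nil receiver (0x0), which suggests the CDI cache keeps going after watcher creation fails. The sketch below is illustrative only, not the actual cdi/cache.go code, and the function name is hypothetical; it shows the kind of guard that would surface such an error instead of panicking:

```go
// Hypothetical guard: surface the fsnotify.NewWatcher error instead of
// storing a nil *fsnotify.Watcher and dereferencing it later.
package cdiwatch

import (
	"fmt"

	"github.com/fsnotify/fsnotify"
)

// newSpecDirWatcher is an illustrative stand-in, not the real CDI cache API.
func newSpecDirWatcher(dirs []string) (*fsnotify.Watcher, error) {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		// Propagate a descriptive error; callers can fall back to an
		// unwatched cache or abort cleanly instead of panicking.
		return nil, fmt.Errorf("unable to allocate inotify instance (check fs.inotify.max_user_instances): %w", err)
	}
	for _, dir := range dirs {
		if err := w.Add(dir); err != nil {
			w.Close()
			return nil, fmt.Errorf("unable to watch %q: %w", dir, err)
		}
	}
	return w, nil
}
```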
Output of rpm -q buildah or apt list buildah:
buildah/unstable,now 1.37.1+ds1-2 amd64 [installed,automatic]
Output of buildah version:
Version: 1.37.1
Go Version: go1.22.6
Image Spec: 1.1.0
Runtime Spec: 1.2.0
CNI Spec: 1.0.0
libcni Version:
image Version: 5.33.1
Git Commit:
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64
BuildPlatform: linux/amd64
Output of podman version if reporting a podman build issue:
Client: Podman Engine
Version: 5.2.1
API Version: 5.2.1
Go Version: go1.22.6
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64
Output of cat /etc/*release:
PRETTY_NAME="Debian GNU/Linux trixie/sid"
NAME="Debian GNU/Linux"
VERSION_CODENAME=trixie
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Output of uname -a:
Linux huxley 6.10.6-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.10.6-1 (2024-08-19) x86_64 GNU/Linux
Output of cat /etc/containers/storage.conf:
[storage]
driver = "overlay"
[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
github-actions commented
A friendly reminder that this issue had no activity for 30 days.