Docker build with GitHub Actions failed on multi-arch with node:alpine
manuc66 opened this issue · 8 comments
Environment
- Platform: linux/amd64
- Docker Version: 20.10.18+azure-2
- Node.js Version: 14.20.1
- Image Tag: node:alpine
Expected Behavior
The build is expected to complete without error, see: https://github.com/manuc66/node-hp-scan-to/actions/runs/3278864778/jobs/5397750728
Current Behavior
The build fails due to a timeout, see: https://github.com/manuc66/node-hp-scan-to/actions/runs/3304345713/jobs/5453599065
#29 [linux/arm/v7 build 4/4] RUN yarn install -d && yarn build && rm dist/*.d.ts dist/*.js.map
Error: The operation was canceled.
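For context, the job runs a multi-platform buildx build roughly along these lines (a sketch only; the image name and full platform list are assumptions, only linux/arm/v7 appears in the log above):
# Sketch of the kind of multi-platform build the CI job performs
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64 \
  -t node-hp-scan-to:test \
  .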
Possible Solution
I switched back to node:18-alpine and the build is working again, see: https://github.com/manuc66/node-hp-scan-to/actions/runs/3356995905/jobs/5562501857
This also looks to be affecting another project: https://github.com/benphelps/homepage/actions/runs/3382149679
Going from
-FROM node:current-alpine
+FROM docker.io/node:18-alpine
fixes it. Hard for me to tell if this is a problem with the image or Node at this point.
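For reference, a minimal Dockerfile sketch of the workaround; the stage layout, WORKDIR and COPY lines are assumptions, only the FROM and RUN lines come from the diff and the failing log above:
# Pin the base image major version instead of node:current-alpine (Node 19),
# whose arm32 variants hang under emulation
FROM docker.io/node:18-alpine AS build
WORKDIR /app
COPY . .
# Same build step as in the failing job log
RUN yarn install -d && yarn build && rm dist/*.d.ts dist/*.js.map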
I've stumbled upon the same issue in n-thumann/IPTV-ReStream#188.
It can be easily reproduced locally with docker run --rm arm32v7/node:19-alpine node -h. Simply printing the help hangs indefinitely on both arm32v6 and arm32v7, while any other architecture or any prior Node.js release prints the help immediately.
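A quick way to probe several platforms for the hang (a sketch; the 30-second timeout is arbitrary, and the multi-arch node:19-alpine tag is used instead of the per-arch arm32v7/node image):
for platform in linux/amd64 linux/arm/v6 linux/arm/v7 linux/arm64; do
  echo "== $platform =="
  # A healthy image prints the help and exits well before the timeout
  timeout 30 docker run --rm --platform "$platform" node:19-alpine node -h >/dev/null \
    && echo ok || echo "hung or failed"
done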
This may be an issue related to or in qemu, because running the same command natively on Alpine on armv7 (a Raspberry Pi in this case) doesn't cause any issues:
alpine:/tmp# uname -m
armv7l
alpine:/tmp# ./node -v
v19.0.0
alpine:/tmp# ./node -h
Usage: node [options] [ script.js ] [arguments]
[... snip ...]
I also ran strace and noticed that node -h yields a ton of [pid 52] mremap(0x8080c000, 4096, 8192, 0) = -1 ENOMEM (Out of memory) while hanging. Maybe someone else has an idea what to do here 🤔
This is the same issue as #1794.
On Node 18 everything is OK; on Node 19 (latest) npm gets stuck on some architectures. Seems to be an upstream issue: npm/cli#5743
Still reproduces on Node v20 images.
I did some investigation and it looks like this is caused by qemu, which is used internally in Docker to emulate foreign architectures, e.g. when running an ARMv7 binary on an x86_64 host.
This can be reproduced via:
❯ ~ docker run -it --rm --platform linux/arm/v7 alpine:3.18.2 sh
/ # apk add nodejs-current=20.5.0-r0
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/armv7/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/armv7/APKINDEX.tar.gz
(1/11) Installing ca-certificates (20230506-r0)
(2/11) Installing libgcc (12.2.1_git20220924-r10)
(3/11) Installing libstdc++ (12.2.1_git20220924-r10)
(4/11) Installing ada-libs (2.5.1-r0)
(5/11) Installing brotli-libs (1.0.9-r14)
(6/11) Installing c-ares (1.19.1-r0)
(7/11) Installing icu-data-en (73.2-r2)
Executing icu-data-en-73.2-r2.post-install
*
* If you need ICU with non-English locales and legacy charset support, install
* package icu-data-full.
*
(8/11) Installing icu-libs (73.2-r2)
(9/11) Installing nghttp2-libs (1.55.1-r0)
(10/11) Installing libuv (1.44.2-r2)
(11/11) Installing nodejs-current (20.5.0-r0)
Executing busybox-1.36.1-r0.trigger
Executing ca-certificates-20230506-r0.trigger
OK: 53 MiB in 26 packages
/ # wget https://dl-cdn.alpinelinux.org/alpine/v3.18/community/x86_64/qemu-arm-8.0.3-r1.apk
Connecting to dl-cdn.alpinelinux.org (146.75.118.132:443)
saving to 'qemu-arm-8.0.3-r1.apk'
qemu-arm-8.0.3-r1.ap 100% |*******************************************************************************************************************************************************| 1218k 0:00:00 ETA
'qemu-arm-8.0.3-r1.apk' saved
/ #
/ # apk add --allow-untrusted qemu-arm-8.0.3-r1.apk
(1/1) Installing qemu-arm (8.0.3-r1)
Executing busybox-1.36.1-r0.trigger
OK: 56 MiB in 27 packages
/ # qemu-arm -version
qemu-arm version 8.0.3
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
/ # /usr/bin/node -v
v20.5.0
/ # qemu-arm -strace /usr/bin/node -h
[...]
55 open("/proc/meminfo",O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 17
55 fcntl64(17,F_SETFD,1) = 0
55 read(17,0x3fffeb2c,4095) = 1363
55 close(17) = 0
55 mremap(1073733632,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073729536,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073725440,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073721344,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073717248,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073713152,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073709056,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073704960,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
55 mremap(1073700864,4096,8192,0,0,4096) = -1 errno=12 (Out of memory)
[...]
This indeed seems to be caused by a bug in qemu and is tracked in https://gitlab.com/qemu-project/qemu/-/issues/1729.
It seems that this is resolved with QEMU 8.1.0. There are still many mremap syscalls, but the same commands above complete when using https://dl-cdn.alpinelinux.org/alpine/v3.19/community/x86_64/qemu-arm-8.1.3-r1.apk.
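To double-check, the reproduction above can be repeated with the newer qemu-arm package; the steps are identical, only the apk URL changes:
/ # wget https://dl-cdn.alpinelinux.org/alpine/v3.19/community/x86_64/qemu-arm-8.1.3-r1.apk
/ # apk add --allow-untrusted qemu-arm-8.1.3-r1.apk
/ # qemu-arm -version          # now reports 8.1.3
/ # qemu-arm /usr/bin/node -h  # prints the usage text instead of hanging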
With tonistiigi/binfmt#144 (QEMU 8.1.4) and node:21 it still doesn't work for me on the arm32v6, arm32v7 and s390x platforms. I tried building on both macos-latest and ubuntu-latest runners.
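For anyone who wants to pin the emulator version on a runner before such a build, a sketch along these lines should work; the tonistiigi/binfmt tag and the platform list are assumptions, pick whichever tag ships the QEMU build under test:
# Replace the registered QEMU handlers with a specific binfmt release
docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
docker run --privileged --rm tonistiigi/binfmt:qemu-v8.1.5 --install arm,s390x
# Then re-run the multi-platform build
docker buildx build --platform linux/arm/v6,linux/arm/v7,linux/s390x .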