Create new alpine targets, issue on alpine_latest distro.
Closed this issue · 6 comments
Hey!
I'm using the action to run some tests right after compiling some Python wheels.
action:
- uses: uraimo/run-on-arch-action@v2.6.0
  if: matrix.libc == 'musllinux_1_1'
  name: Install built wheel musl
  with:
    arch: ${{ matrix.target }}
    distro: alpine_latest
    githubToken: ${{ github.token }}
    install: |
      apk add py3-pip
      pip3 install -U pip
    run: |
      pip3 install package_wheel --no-index --find-links dist/ --force-reinstall
It was working until January, then it started breaking the CI with this error:
------
Dockerfile.armv7.alpine_latest:4
--------------------
2 |
3 | COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
4 | >>> RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh
5 |
--------------------
ERROR: failed to solve: process "/bin/sh -c chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh" did not complete successfully: exit code: 1
Error: The process '/home/runner/work/_actions/uraimo/run-on-arch-action/v2.6.0/src/run-on-arch.sh' failed with exit code 1
I think it's related to the latest Alpine version.
If I want to pin a previous version (e.g. v3.18), do I need to create the target manually?
Hi, thanks for reporting this, you can specify a custom image instead of using arch/distro like this:
arch: none
distro: none
base_image: arm32v7/alpine:3.18
This should work.
More images here: https://hub.docker.com/r/arm32v7/alpine/
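Applied to the original step, the whole block would look roughly like this (a sketch assuming an armv7-only matrix; other architectures need the matching Docker Hub namespace):

```yaml
- uses: uraimo/run-on-arch-action@v2.6.0
  if: matrix.libc == 'musllinux_1_1'
  name: Install built wheel musl
  with:
    arch: none
    distro: none
    base_image: arm32v7/alpine:3.18
    githubToken: ${{ github.token }}
    install: |
      apk add py3-pip
    run: |
      pip3 install package_wheel --no-index --find-links dist/ --force-reinstall
```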
@uraimo thanks! I just figured this out o/,
but I hadn't noticed that the architecture has to be part of the image name; going to try it.
I was unable to build it locally because it can't find the install script.
cd Dockerfiles
docker build -t test_image -f Dockerfile.armv7.alpine_latest .
...
ERROR [2/3] COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
Then I found this code in src/run-on-arch, which seems to do some magic (and maybe creates the "*-install.sh"?):
// If a custom base image is given, then dynamically create its Dockerfile.
if (base_image) {
let lines = [];
lines.push(`FROM ${base_image}`);
lines.push("COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh");
lines.push("RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh");
console.log(`Writing custom Dockerfile to: ${dockerFile} ...`);
fs.writeFileSync(dockerFile, lines.join("\n"));
}
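Since the script is only written at runtime from the `install:` input, a local build can be reproduced by recreating a stand-in copy first. A minimal sketch (the script contents below are an assumption; substitute the commands from your own `install:` input):

```shell
# The action generates run-on-arch-install.sh at runtime from the
# `install:` input, so it is not in the repo; recreate a stand-in
# copy before building locally.
mkdir -p Dockerfiles
cat > Dockerfiles/run-on-arch-install.sh <<'EOF'
#!/bin/sh
set -e
apk add py3-pip
EOF
chmod +x Dockerfiles/run-on-arch-install.sh
```

With the stand-in in place, running `docker build -t test_image -f Dockerfile.armv7.alpine_latest .` from inside Dockerfiles should get past the COPY step.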
The action seems much more versatile when it works like this; maybe we could automate the architecture prefix on base_image?
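For example, something along these lines could prepend the Docker Hub per-architecture namespace automatically. This is a hypothetical sketch, not part of the action; the mapping is an assumption based on Docker Hub's naming (arm32v7, arm64v8, ...):

```javascript
// Hypothetical helper: map the action's `arch` input to the Docker Hub
// per-arch namespace, so `base_image: alpine:3.18` plus `arch: armv7`
// could resolve to `arm32v7/alpine:3.18` automatically.
const ARCH_TO_HUB_PREFIX = {
  armv6: "arm32v6",
  armv7: "arm32v7",
  aarch64: "arm64v8",
  riscv64: "riscv64",
  ppc64le: "ppc64le",
  s390x: "s390x",
};

function resolveBaseImage(arch, image) {
  // Images that already carry a namespace (e.g. "arm32v7/alpine:3.18")
  // are left untouched.
  if (image.includes("/")) return image;
  const prefix = ARCH_TO_HUB_PREFIX[arch];
  if (!prefix) throw new Error(`No Docker Hub prefix known for arch: ${arch}`);
  return `${prefix}/${image}`;
}
```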
Using:
arch: none
distro: none
base_image: arm32v7/alpine:3.18
I still have an issue, but now it's in the Dockerfile build:
#6 [1/3] FROM docker.io/arm32v7/alpine:3.16@sha256:73ab068ac4acca454294ecb8ec50f2331953a546a60ff241b0dc5f2f42de0d25
#6 resolve docker.io/arm32v7/alpine:3.16@sha256:73ab068ac4acca454294ecb8ec50f2331953a546a60ff241b0dc5f2f42de0d25 done
#6 sha256:73ab068ac4acca454294ecb8ec50f2331953a546a60ff241b0dc5f2f42de0d25 528B / 528B done
#6 sha256:8282ae194fdb8233c361bceb846f5bf7c65e19d1ffae178af604e548a8bffeff 1.49kB / 1.49kB done
#6 sha256:3dae0518e749f9d581ecf3cda7ea260e4a87c1413599b7248a60554e9ae3a3ac 0B / 2.42MB 0.1s
#6 extracting sha256:3dae0518e749f9d581ecf3cda7ea260e4a87c1413599b7248a60554e9ae3a3ac
#6 sha256:3dae0518e749f9d581ecf3cda7ea260e4a87c1413599b7248a60554e9ae3a3ac 2.42MB / 2.42MB 0.4s done
#6 extracting sha256:3dae0518e749f9d581ecf3cda7ea260e4a87c1413599b7248a60554e9ae3a3ac 0.0s done
#6 DONE 0.4s
#9 [2/3] COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
#9 DONE 0.0s
#10 [3/3] RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh
#10 0.181 /bin/sh: /root/run-on-arch-install.sh: not found
#10 ERROR: process "/bin/sh -c chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh" did not complete successfully: exit code: 127
------
> importing cache manifest from ghcr.io/raultrombin/navigator-lib/run-on-arch-raultrombin-navigator-lib-test-all-targets-none-none:
------
------
> [3/3] RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh:
/root/run-on-arch-install.sh: not found
------
Dockerfile.none.none:3
--------------------
1 | FROM arm32v7/alpine:3.16
2 | COPY ./run-on-arch-install.sh /root/run-on-arch-install.sh
3 | >>> RUN chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh
--------------------
ERROR: failed to solve: process "/bin/sh -c chmod +x /root/run-on-arch-install.sh && /root/run-on-arch-install.sh" did not complete successfully: exit code: 127
Error: The process '/home/runner/work/_actions/uraimo/run-on-arch-action/v2.6.0/src/run-on-arch.sh' failed with exit code 1
@uraimo creating a fork of run-on-arch-action and changing the Alpine version from latest to 3.18 solved my issue for now.
run-on-arch-install.sh
@uraimo sorry, this is not an action-related issue, but unexpected Alpine behavior.
The install failed while using the pip packages... and somehow that broke the action.
This new version fixed it:
install: |
  apk add py3-pip
run: |
  python3 -m venv ./venv
  . ./venv/bin/activate
  pip3 install -U pip
  pip3 install package_wheel --no-index --find-links dist/ --force-reinstall
Suggestion: maybe the action could fail fast when the install section fails and display the error(?)