A GitHub Action to easily set up and use a chroot-based [1] Alpine Linux environment in your workflows and emulate any supported CPU architecture (using QEMU).
```yaml
runs-on: ubuntu-latest
steps:
  - uses: jirutka/setup-alpine@v1
    with:
      branch: v3.15

  - run: cat /etc/alpine-release
    shell: alpine.sh {0}
```

See more usage examples below.
- **Easy to use and flexible**
  - Add one step, `uses: jirutka/setup-alpine@v1`, to set up the Alpine environment; for the subsequent steps that should run in this environment, specify `shell: alpine.sh {0}`. See the usage examples below.
  - You can switch between the host system (Ubuntu) and one or even more Alpine environments within a single job, i.e. each step can run in a different environment. This is ideal, for example, for cross-compilation.

- **Emulation of non-x86 architectures**
  - This couldn't be easier: just specify the input parameter `arch`. The action sets up the QEMU user space emulator and installs an Alpine Linux environment for the specified architecture. You can then build and run binaries for/from this architecture just like on real hardware, only (significantly) slower; it's software emulation, after all.

- **No hassle with Docker images**
  - You don't have to write any Dockerfiles, for example, to cross-compile Rust crates with C dependencies.[2]
  - And no, you really don't need any so-called "official" Docker image for gcc, nodejs, python or whatever you need; just install it using `apk` from the Alpine packages (see the sketch after this list). It is fast, really fast!

- **Always up to date environment**
  - The whole environment, all packages, is always installed directly from Alpine Linux's official repositories using apk-tools (Alpine's package manager). There is no intermediate layer that tends to lag behind with security fixes (such as Docker images). You might be thinking: isn't that slow? No, it's faster than pulling a Docker image!
  - And no, you don't need a Docker image to get a stable build environment either. Alpine Linux provides stable releases (branches); these receive only (security) fixes, no breaking changes.

- **It's simple and lightweight**
  - You don't have to worry about this on a hosted CI service, but still: the action is written in ~220 LoC and uses only basic Unix tools (chroot, mount, wget, Bash, …) and apk-tools (Alpine's package manager). That's it.
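To give a concrete picture of the "no Docker images" point above: instead of pulling a gcc or nodejs image, you can install the tools directly with `apk` inside the chroot. A minimal sketch; the package names here are just examples:

```yaml
steps:
  - uses: jirutka/setup-alpine@v1

  # apk needs root privileges, hence the --root flag on the shell wrapper.
  - name: Install a C toolchain and Node.js from Alpine's repositories
    run: apk add gcc musl-dev nodejs npm
    shell: alpine.sh --root {0}
```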
The action accepts the following input parameters (a combined sketch follows the list):

- **apk-tools-url**: URL of the apk-tools static binary to use. It must end with `#!sha256!` followed by a SHA-256 hash of the file. This should normally be left at the default value. Default: see action.yml.

- **arch**: CPU architecture to emulate using the QEMU user space emulator. Allowed values are: `x86_64` (native), `x86` (native), `aarch64`, `armhf` [3], `armv7`, `ppc64le`, `riscv64` [4], and `s390x`. Default: `x86_64`.

- **branch**: Alpine branch (aka release) to install: `vMAJOR.MINOR`, `latest-stable`, or `edge`. Example: `v3.15`. Default: `latest-stable`.

- **extra-keys**: A list of paths of additional trusted keys (for installing packages from the extra-repositories) to copy into /etc/apk/keys/. The paths should be relative to the workspace directory (the default location of your repository when using the checkout action). Example: `.keys/pkgs@example.org-56d0d9fd.rsa.pub`.

- **extra-repositories**: A list of additional Alpine repositories to add into /etc/apk/repositories (Alpine's official main and community repositories are always added).

- **mirror-url**: URL of an Alpine Linux mirror to fetch packages from. Default: `http://dl-cdn.alpinelinux.org/alpine`.

- **packages**: A list of Alpine packages to install. Example: `build-base openssh-client`. Default: no extra packages.

- **shell-name**: Name of the script for running `sh` in the Alpine chroot that will be added to `GITHUB_PATH`. This name should be used in `jobs.<job_id>.steps[*].shell` (e.g. `shell: alpine.sh {0}`) to run the step's script in the chroot. Default: `alpine.sh`.

- **volumes**: A list of directories on the host system to bind mount into the chroot. You can specify the source and destination path as `<src-dir>:<dest-dir>`, where `<src-dir>` is an absolute path of an existing directory on the host system and `<dest-dir>` is an absolute path in the chroot (it will be created if it doesn't exist). You can omit the latter if both paths are the same. Please note that /home/runner/work (where your workspace is located) is always mounted; don't specify it here. Example: `${{ steps.alpine-aarch64.outputs.root-path }}:/mnt/alpine-aarch64`.
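To illustrate how these inputs fit together, a setup step might look like the following sketch; the branch, architecture, packages, and shell name are just illustrative values reused from the examples in this README:

```yaml
- uses: jirutka/setup-alpine@v1
  with:
    arch: aarch64                  # emulate aarch64 via QEMU
    branch: v3.15                  # install the v3.15 stable branch
    packages: >                    # extra packages to install with apk
      build-base
      openssh-client
    shell-name: alpine-aarch64.sh  # custom name for the shell wrapper script
```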
Basic usage:

```yaml
runs-on: ubuntu-latest
steps:
  - uses: actions/checkout@v2

  - name: Setup latest Alpine Linux
    uses: jirutka/setup-alpine@v1

  - name: Run script inside Alpine chroot as root
    run: |
      cat /etc/alpine-release
      apk add nodejs npm
    shell: alpine.sh --root {0}

  - name: Run script inside Alpine chroot as the default user (unprivileged)
    run: |
      ls -la  # as you would expect, you're in your workspace directory
      npm build
    shell: alpine.sh {0}

  - name: Run script on the host system (Ubuntu)
    run: |
      cat /etc/os-release
    shell: bash
```
Set up Alpine Linux v3.15 with the specified packages installed:

```yaml
- uses: jirutka/setup-alpine@v1
  with:
    branch: v3.15
    packages: >
      build-base
      libgit2-dev
      meson
```
Set up and use Alpine Linux for aarch64 with QEMU emulation:

```yaml
runs-on: ubuntu-latest
steps:
  - name: Setup Alpine Linux v3.15 for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      branch: v3.15

  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine.sh {0}
```
Install a package from the edge/testing repository:

```yaml
- uses: jirutka/setup-alpine@v1
  with:
    extra-repositories: |
      http://dl-cdn.alpinelinux.org/alpine/edge/testing
    packages: some-pkg-from-testing
```
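If you add a repository that is not signed by one of Alpine's bundled keys, you would also provide its public key via `extra-keys`. A sketch under assumptions; the repository URL, key file, and package name are hypothetical:

```yaml
steps:
  # The key path is relative to the workspace, so check out the repository first.
  - uses: actions/checkout@v2

  - uses: jirutka/setup-alpine@v1
    with:
      extra-repositories: |
        https://example.org/alpine/repo
      extra-keys: .keys/pkgs@example.org-56d0d9fd.rsa.pub
      packages: some-pkg-from-example-org
```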
Use multiple Alpine environments (x86_64 and aarch64) in a single job:

```yaml
runs-on: ubuntu-latest
steps:
  - name: Setup latest Alpine Linux for x86_64
    uses: jirutka/setup-alpine@v1
    with:
      shell-name: alpine-x86_64.sh

  - name: Setup latest Alpine Linux for aarch64
    uses: jirutka/setup-alpine@v1
    with:
      arch: aarch64
      shell-name: alpine-aarch64.sh

  - name: Run script inside Alpine chroot
    run: uname -m
    shell: alpine-x86_64.sh {0}

  - name: Run script inside Alpine chroot with aarch64 emulation
    run: uname -m
    shell: alpine-aarch64.sh {0}

  - name: Run script on the host system (Ubuntu)
    run: cat /etc/os-release
    shell: bash
```
Cross-compile a Rust application with C dependencies into a statically linked aarch64 binary:

```yaml
runs-on: ubuntu-latest
strategy:
  matrix:
    include:
      - rust-target: aarch64-unknown-linux-musl
        os-arch: aarch64
env:
  CROSS_SYSROOT: /mnt/alpine-${{ matrix.os-arch }}
steps:
  - uses: actions/checkout@v1

  - name: Set up Alpine Linux for ${{ matrix.os-arch }} (target arch)
    id: alpine-target
    uses: jirutka/setup-alpine@v1
    with:
      arch: ${{ matrix.os-arch }}
      branch: edge
      packages: >
        dbus-dev
        dbus-static
      shell-name: alpine-target.sh

  - name: Set up Alpine Linux for x86_64 (build arch)
    uses: jirutka/setup-alpine@v1
    with:
      arch: x86_64
      packages: >
        build-base
        pkgconf
        lld
        rustup
      volumes: ${{ steps.alpine-target.outputs.root-path }}:${{ env.CROSS_SYSROOT }}
      shell-name: alpine.sh

  - name: Install Rust stable toolchain via rustup
    run: rustup-init --target ${{ matrix.rust-target }} --default-toolchain stable --profile minimal -y
    shell: alpine.sh {0}

  - name: Build statically linked binary
    env:
      CARGO_BUILD_TARGET: ${{ matrix.rust-target }}
      CARGO_PROFILE_RELEASE_STRIP: symbols
      PKG_CONFIG_ALL_STATIC: '1'
      PKG_CONFIG_LIBDIR: ${{ env.CROSS_SYSROOT }}/usr/lib/pkgconfig
      RUSTFLAGS: -C linker=/usr/bin/ld.lld
      SYSROOT: /dummy  # workaround for https://github.com/rust-lang/pkg-config-rs/issues/102
    run: |
      # Workaround for https://github.com/rust-lang/pkg-config-rs/issues/102.
      echo -e '#!/bin/sh\nPKG_CONFIG_SYSROOT_DIR=${{ env.CROSS_SYSROOT }} exec pkgconf "$@"' \
        | install -m755 /dev/stdin pkg-config
      export PKG_CONFIG="$(pwd)/pkg-config"
      cargo build --release --locked --verbose
    shell: alpine.sh {0}

  - name: Try to run the binary
    run: ./myapp --version
    working-directory: target/${{ matrix.rust-target }}/release
    shell: alpine-target.sh {0}
```
This action is an evolution of the alpine-chroot-install script I originally wrote for Travis CI in 2016. The implementation is principally the same, but tailored to GitHub Actions. It’s so simple and fast thanks to how awesome apk-tools is!
This project is licensed under MIT License. For the full text of the license, see the LICENSE file.
[4] Available only on the `edge` branch for now.