muslrust

A docker environment for building static rust binaries for x86_64 linux using musl. Built daily via GitHub Actions.

Binaries compiled with muslrust are lightweight, call straight into the kernel without other system library dependencies, can be shipped to most linux distributions without compatibility issues, and can be inserted into lightweight docker images such as static distroless, scratch, or alpine without further installs.

The goal is to simplify the creation of small and efficient cloud containers, or stand-alone linux binary releases.

This image includes popular C libraries compiled with musl-gcc, enabling static builds even when these libraries are used.

Usage

Pull and run from a rust project root:

docker pull clux/muslrust:stable
docker run -v $PWD:/volume --rm -t clux/muslrust:stable cargo build --release

You should have a static executable in the target folder:

ldd target/x86_64-unknown-linux-musl/release/EXECUTABLE
        not a dynamic executable

Examples

The binaries and images for small apps generally end up around 6MB compressed or 20MB uncompressed without stripping.

The recommended production image is static distroless because it avoids the SSL issues described below (common with scratch), and it disallows shelling in via kubectl exec (use alpine if you want this).
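A minimal multi-stage Dockerfile sketch for this (the binary name myapp is illustrative, and the /volume path assumes the image's default working directory):

# build stage: produce the static binary with muslrust
FROM clux/muslrust:stable AS builder
COPY . .
RUN cargo build --release

# runtime stage: copy only the binary into static distroless
FROM gcr.io/distroless/static
COPY --from=builder /volume/target/x86_64-unknown-linux-musl/release/myapp /myapp
ENTRYPOINT ["/myapp"]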

Available Tags

The standard tags are :stable or a dated :nightly-{YYYY-mm-dd}.

For pinned or historical builds, see the available tags on Docker Hub.
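For example, a dated nightly can be pulled like this (the date shown is purely illustrative):

docker pull clux/muslrust:nightly-2024-01-01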

C Libraries

A selection of common system libraries (for example openssl and libpq, referenced in the troubleshooting sections below) are compiled with musl-gcc, and we try to keep these up to date.

Developing

Clone, tweak, build, and run tests:

git clone git@github.com:clux/muslrust.git && cd muslrust
just build
just test

Tests

Before we push a new version of muslrust, we run tests to ensure that a set of common crates can be used and statically linked.

Caching

Local Volume Caches

Repeat builds locally are always from scratch (thus slow) without a cached cargo directory. You can set up a docker volume cache by adding -v cargo-cache:/root/.cargo/registry to the docker run command.

You'll have an extra volume that you can inspect with docker volume inspect cargo-cache.

Suggested developer usage is to add the following function to your ~/.bashrc:

musl-build() {
  docker run \
    -v cargo-cache:/root/.cargo/registry \
    -v "$PWD:/volume" \
    --rm -it clux/muslrust cargo build --release
}

Then use in your project:

$ cd myproject
$ musl-build
    Finished release [optimized] target(s) in 0.0 secs

Caching on CI

On CI, you need to find a way to either store the cargo-cache referenced above, or rely on docker layer caching (see cargo-chef).
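A rough layer-caching sketch with cargo-chef (stage and file names are illustrative; cargo-chef itself is installed against the GNU toolchain, see the note on binary crates further down):

FROM clux/muslrust:stable AS chef
# binary crates install against the GNU toolchain, see "Binaries distributed via Cargo" below
RUN CARGO_BUILD_TARGET=x86_64-unknown-linux-gnu cargo install cargo-chef
WORKDIR /volume

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /volume/recipe.json recipe.json
# dependency layer is cached until Cargo.toml/Cargo.lock change
RUN cargo chef cook --release --target x86_64-unknown-linux-musl --recipe-path recipe.json
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl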

GitHub Actions

GitHub Actions supports both methods.

CircleCI

CircleCI supports both methods.

Troubleshooting

SSL Verification

You might need to point openssl at the location of your certificates explicitly to avoid certificate errors on https requests.

export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
export SSL_CERT_DIR=/etc/ssl/certs

You can also hardcode this in your binary, or, more sensibly, set it in your running docker image. The openssl-probe crate can also be used to detect where these reside. If you use distroless:static, you can avoid this.
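If you set it in the docker image instead, a small sketch for an alpine-based runtime image (binary name illustrative, assuming the binary was built into ./target as above) could look like:

FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY target/x86_64-unknown-linux-musl/release/myapp /myapp
# point openssl at alpine's certificate bundle
ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt \
    SSL_CERT_DIR=/etc/ssl/certs
ENTRYPOINT ["/myapp"]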

Diesel and PQ builds

Diesel works with the older version of libpq we bundle (see #81). See the test/dieselpgcrate for specifics.

For stuff like infer_schema! to work you need to explicitly pass -e DATABASE_URL=$DATABASE_URL to the docker run command. It's probably easier to just make diesel print-schema > src/schema.rs part of your migration setup though.
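For example, extending the basic build command from the Usage section:

docker run -v $PWD:/volume -e DATABASE_URL=$DATABASE_URL --rm -t clux/muslrust:stable cargo build --release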

Note that diesel compiles with openssl statically since 1.34.0, so you need to include the openssl crate before diesel due to pq-sys#25:

extern crate openssl;
#[macro_use] extern crate diesel;

This is true even if you connect without sslmode=require.

Filesystem permissions on local builds

When building locally, the musl parts of the ./target artifacts directory will be owned by root and require sudo rm -rf target/ to clear. This is an intended complexity tradeoff with user builds.

Debugging in blank containers

If you are running a plain alpine/scratch container with your musl binary in it, you might need to compile with debug symbols and set ENV RUST_BACKTRACE=full in your Dockerfile.

In alpine, if this doesn't work (or fails to give you line numbers), try installing the rust package (via apk). This should not be necessary anymore though!

For easily grabbing backtraces from rust docker apps, try adding sentry. It seems to be able to grab backtraces regardless of compile options/env vars.

SELinux

On SELinux enabled systems like Fedora, you will need to configure SELinux labels, e.g. by adding the :Z or :z flags where appropriate: -v $PWD:/volume:Z.
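For example:

docker run -v $PWD:/volume:Z --rm -t clux/muslrust:stable cargo build --release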

Extending

Extra C libraries

If you need extra C libraries, you can follow the builder pattern approach via e.g. rfcbot-rs's Dockerfile and add extra curl -> make instructions. We are unlikely to include other C libraries herein unless they are very popular.
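A rough sketch of that pattern (the library name, URL, and configure flags are purely illustrative; consult the linked Dockerfile for a real-world example):

FROM clux/muslrust:stable AS builder
# illustrative: fetch and build an extra C library statically with musl-gcc
RUN curl -sSL https://example.org/libfoo-1.0.tar.gz | tar xz && \
    cd libfoo-1.0 && \
    CC=musl-gcc ./configure --enable-static --disable-shared --prefix=/usr/local && \
    make -j"$(nproc)" && make install
COPY . .
RUN cargo build --release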

Extra Rustup components

You can install extra components distributed via Rustup like normal:

rustup component add clippy

Binaries distributed via Cargo

If you need to install a binary crate such as ripgrep on a CI build image, you need to build it against the GNU toolchain (see #37):

CARGO_BUILD_TARGET=x86_64-unknown-linux-gnu cargo install ripgrep

Alternatives