buildx

buildx is a Docker CLI plugin for extended build capabilities with BuildKit.

Key features:

  • Familiar UI from docker build
  • Full BuildKit capabilities with container driver
  • Multiple builder instance support
  • Multi-node builds for cross-platform images
  • Compose build support
  • High-level build constructs (bake)
  • In-container driver support (both Docker and Kubernetes)

Table of Contents

  • Installing
  • Set buildx as the default builder
  • Building
  • Getting started
  • Contributing

Installing

Using buildx as a Docker CLI plugin requires Docker 19.03 or newer. A limited set of functionality works with older versions of Docker when invoking the binary directly.

Windows and macOS

Docker Buildx is included in Docker Desktop for Windows and macOS.

Linux packages

Docker Linux packages also include Docker Buildx when installed using the DEB or RPM packages.
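
For example, on a Debian- or Ubuntu-based system with Docker's apt repository already configured, the plugin can be installed as a package (a sketch; the exact package availability depends on your distribution setup):

$ sudo apt-get update
$ sudo apt-get install docker-buildx-plugin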

Manual download

Important

This section is for unattended installation of the buildx component. These instructions are mostly suitable for testing purposes. We do not recommend installing buildx using manual download in production environments, because manually downloaded binaries are not updated automatically with security updates.

On Windows and macOS, we recommend that you install Docker Desktop instead. For Linux, we recommend that you follow the instructions specific to your distribution.

You can also download the latest binary from the GitHub releases page.

Rename the relevant binary and copy it to the destination matching your OS:

OS       Binary name         Destination folder
Linux    docker-buildx       $HOME/.docker/cli-plugins
macOS    docker-buildx       $HOME/.docker/cli-plugins
Windows  docker-buildx.exe   %USERPROFILE%\.docker\cli-plugins

Or copy it into one of these folders to install it system-wide.

On Unix environments:

  • /usr/local/lib/docker/cli-plugins OR /usr/local/libexec/docker/cli-plugins
  • /usr/lib/docker/cli-plugins OR /usr/libexec/docker/cli-plugins

On Windows:

  • C:\ProgramData\Docker\cli-plugins
  • C:\Program Files\Docker\cli-plugins

Note

On Unix environments, you may also need to make the binary executable with chmod +x:

$ chmod +x ~/.docker/cli-plugins/docker-buildx
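
Putting it together on Linux, a minimal sketch (the version and architecture in the URL are examples; substitute the release you want from the releases page):

$ mkdir -p ~/.docker/cli-plugins
$ curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
    https://github.com/docker/buildx/releases/download/v0.12.1/buildx-v0.12.1.linux-amd64
$ chmod +x ~/.docker/cli-plugins/docker-buildx
$ docker buildx version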

Dockerfile

Here is how to install and use Buildx inside a Dockerfile through the docker/buildx-bin image:

FROM docker
COPY --from=docker/buildx-bin /buildx /usr/libexec/docker/cli-plugins/docker-buildx
RUN docker buildx version

Set buildx as the default builder

Running the command docker buildx install sets up the docker builder command as an alias to docker buildx build. This makes it possible for docker build to use the current buildx builder.

To remove this alias, run docker buildx uninstall.
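
For example:

$ docker buildx install
$ docker build .        # now runs through the current buildx builder
$ docker buildx uninstall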

Building

# Buildx 0.6+
$ docker buildx bake "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv ./bin/buildx ~/.docker/cli-plugins/docker-buildx

# Docker 19.03+
$ DOCKER_BUILDKIT=1 docker build --platform=local -o . "https://github.com/docker/buildx.git"
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx ~/.docker/cli-plugins/docker-buildx

# Local 
$ git clone https://github.com/docker/buildx.git && cd buildx
$ make install

Getting started

Building with buildx

Buildx is a Docker CLI plugin that extends the docker build command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently.

After installation, buildx can be accessed through the docker buildx command with Docker 19.03 or newer. docker buildx build is the command for starting a new build. With Docker versions older than 19.03, the buildx binary can be called directly to access the docker buildx subcommands.

$ docker buildx build .
[+] Building 8.4s (23/32)
 => ...

Buildx always builds using the BuildKit engine and does not require the DOCKER_BUILDKIT=1 environment variable to start builds.

The docker buildx build command supports the features available for docker build, including outputs configuration, inline build caching, and specifying the target platform. In addition, Buildx supports features that are not yet available for regular docker build, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
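
For example, a sketch combining some of these flags (the image and cache references are placeholders):

$ docker buildx build \
    --platform linux/arm64 \
    --cache-from type=registry,ref=user/app:buildcache \
    --output type=oci,dest=app.tar .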

Buildx is designed to be flexible and can be run in different configurations that are exposed through a driver concept. Currently, we support:

  • a docker driver that uses the BuildKit library bundled into the Docker daemon binary,
  • a docker-container driver that automatically launches BuildKit inside a Docker container,
  • a kubernetes driver that spins up pods with a defined BuildKit container image to build your images,
  • a remote driver that connects to manually provisioned and managed buildkitd instances.

We plan to add more drivers in the future.
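
For example, builders backed by different drivers can be created side by side (the builder names and the remote endpoint below are placeholders; the kubernetes driver assumes a working kubeconfig):

$ docker buildx create --name container-builder --driver docker-container
$ docker buildx create --name k8s-builder --driver kubernetes
$ docker buildx create --name remote-builder --driver remote tcp://buildkitd.example.com:1234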

The user experience of buildx is very similar across drivers, but some features are not currently supported by the docker driver, because the BuildKit library bundled into the docker daemon currently uses a different storage component. In contrast, all images built with the docker driver are automatically added to the docker images view by default, whereas when using other drivers, the method for outputting an image needs to be selected with --output.
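
For example, when using a non-docker driver you can explicitly load the result into the local image store or push it to a registry (the tags are placeholders):

$ docker buildx build --load -t myapp:dev .
$ docker buildx build --push -t user/myapp:latest .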

Working with builder instances

By default, buildx initially uses the docker driver if it is supported, providing a user experience very similar to native docker build. Note that you must use a local shared daemon to build your applications.

Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.

You can create new instances using the docker buildx create command. This creates a new builder instance with a single node based on your current configuration.

To use a remote node, you can specify the DOCKER_HOST or the remote context name while creating the new builder. After creating a new instance, you can manage its lifecycle using the docker buildx inspect, docker buildx stop, and docker buildx rm commands. To list all available builders, use docker buildx ls. After creating a new builder you can also append new nodes to it.

To switch between different builders, use docker buildx use <name>. After running this command, the build commands will automatically use this builder.
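
A typical lifecycle might look like this (the builder name is a placeholder):

$ docker buildx create --name mybuilder --driver docker-container
$ docker buildx inspect mybuilder --bootstrap
$ docker buildx ls
$ docker buildx use mybuilder
$ docker buildx build .
$ docker buildx rm mybuilder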

Docker also features a docker context command that can be used to give names to remote Docker API endpoints. Buildx integrates with docker context so that all of your contexts automatically get a default builder instance. While creating a new builder instance or adding a node to it, you can also set the context name as the target.
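
For example, assuming a remote host reachable over SSH (the host and names are placeholders):

$ docker context create remote-host --docker "host=ssh://user@example.com"
$ docker buildx create --name remote-builder remote-host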

Building multi-platform images

BuildKit is designed to work well for building for multiple platforms and not only for the architecture and operating system that the user invoking the build happens to run.

When you invoke a build, you can set the --platform flag to specify the target platform for the build output (for example, linux/amd64, linux/arm64, or darwin/amd64).

When the current builder instance is backed by the docker-container or kubernetes driver, you can specify multiple platforms together. In this case, it builds a manifest list which contains images for all specified architectures. When you use this image in docker run or docker service, Docker picks the correct image based on the node's platform.
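
For example, building and pushing a manifest list for two platforms (the tag is a placeholder):

$ docker buildx build --platform linux/amd64,linux/arm64 -t user/app:latest --push .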

You can build multi-platform images using three different strategies that are supported by Buildx and Dockerfiles:

  1. Using the QEMU emulation support in the kernel
  2. Building on multiple native nodes using the same builder instance
  3. Using a stage in Dockerfile to cross-compile to different architectures

QEMU is the easiest way to get started if your node already supports it (for example, if you are using Docker Desktop). It requires no changes to your Dockerfile, and BuildKit automatically detects the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture, it automatically loads it through a binary registered in the binfmt_misc handler.

For QEMU binaries registered with binfmt_misc on the host OS to work transparently inside containers, they must be registered with the fix_binary flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check for proper registration by verifying that F is among the flags in /proc/sys/fs/binfmt_misc/qemu-*. While Docker Desktop comes preconfigured with binfmt_misc support for additional platforms, for other installations it likely needs to be installed using the tonistiigi/binfmt image.

$ docker run --privileged --rm tonistiigi/binfmt --install all
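
To check whether the F flag is set for a registered emulator, you can inspect the corresponding binfmt_misc entry (qemu-aarch64 is used here as an example):

$ grep flags: /proc/sys/fs/binfmt_misc/qemu-aarch64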

Using multiple native nodes provides better support for more complicated cases that are not handled by QEMU and generally offers better performance. You can add additional nodes to the builder instance using the --append flag.

Assuming contexts node-amd64 and node-arm64 exist in docker context ls:

$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .

Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with --platform using the native architecture of the build node. A list of build arguments like BUILDPLATFORM and TARGETPLATFORM is available automatically inside your Dockerfile and can be leveraged by the processes running as part of your build.

FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log

You can also use tonistiigi/xx Dockerfile cross-compilation helpers for more advanced use-cases.

High-level build options

Buildx also aims to provide support for high-level build concepts that go beyond invoking a single build command. We want to support building all the images in your application together and let users define project-specific reusable build flows that can then be easily invoked by anyone.

BuildKit efficiently handles multiple concurrent build requests and de-duplicates work. Build commands can be combined with general-purpose command runners (for example, make). However, these tools generally invoke builds in sequence and therefore cannot leverage the full potential of BuildKit parallelization, or combine BuildKit's output for the user. For this use case, we have added a command called docker buildx bake.

The bake command supports building images from compose files, similar to docker-compose build, but allows all the services to be built concurrently as part of a single request.
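
For example, assuming a docker-compose.yml with build sections is present in the current directory:

$ docker buildx bake -f docker-compose.yml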

There is also support for custom build rules from HCL/JSON files, allowing better code reuse and different target groups. The design of bake is still at an early stage, and we are looking for feedback from users.
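
A minimal sketch of an HCL definition and its invocation (the file, group, target, and tag names are examples):

# docker-bake.hcl
group "default" {
  targets = ["app"]
}

target "app" {
  dockerfile = "Dockerfile"
  tags       = ["user/app:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
}

$ docker buildx bake -f docker-bake.hcl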

Contributing

Want to contribute to Buildx? Awesome! You can find information about contributing to this project in CONTRIBUTING.md.