blue-build/cli

bug: Fedora distrobox image doesn't set up userspace with docker properly


Current Behavior

The last several times I have run a successful bluebuild build <recipe> command, the process has ended with something like the following behavior (in this case, the image name is combined-nvidia):

[2024-03-29T20:49:37Z INFO  blue_build::commands::build] --> 29330c35fdcc
[2024-03-29T20:49:37Z INFO  blue_build::commands::build] [5/5] STEP 29/29: LABEL "org.blue-build.build-id"="3fcf8459-9de7-4929-b421-2f731770a1b4"
[2024-03-29T20:49:37Z INFO  blue_build::commands::build] [5/5] COMMIT combined-nvidia:local-39
[2024-03-29T20:49:38Z INFO  blue_build::commands::build] --> c4466ebead8a
[2024-03-29T20:49:38Z INFO  blue_build::commands::build] Successfully tagged localhost/combined-nvidia:local-39
[2024-03-29T20:49:50Z INFO  blue_build::commands::build] c4466ebead8a6ea9bf54f98f5c88184f6a735dda63024f3b0c0749441f0baa70



(press <Enter> as much as you like, wait as long as you like, no changes)


The terminal hangs at this point without the process completing. On pressing <CTRL-C>:

^C[2024-03-29T20:59:33Z INFO  blue_build::commands::build] Recieved SIGINT, cleaning up build...

Expected Behavior

The bluebuild build command should exit gracefully (returning exit code 0 or similar) and restore control of the shell to the user.
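
That is, something like the following (illustrative, using the same <recipe> placeholder as above):

$ bluebuild build <recipe>; echo "exit code: $?"
exit code: 0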

Additional context/Screenshots

The report below was generated from inside the bluebuild cli distrobox image using bluebuild bug-report.

Possible Solution

N/A

Environment

  • Blue Build Version: 0.8.1
  • Operating system: Fedora 38.0.0
  • Branch/Tag: ()
  • Git Commit Hash:

Shell

  • Name: bash
  • Version: GNU bash, version 5.2.26(1)-release (x86_64-redhat-linux-gnu)

  • Terminal emulator:

Rust

  • Rust Version: rustc 1.76.0 (07dca489a 2024-02-04)
  • Rust channel: 1.76.0-x86_64-unknown-linux-gnu release
  • Build Time: 2024-02-26 14:49:06 +00:00

Recipe:

This issue has happened on all of my custom recipes, but here is one particular example:

name: sway
description: sway image based on sericiea
base_image: ghcr.io/ublue-os/sericea-main
image_version: '39'
blue_build_tag: null
modules:
- type: files
  files:
  - usr: /usr
- type: akmods
  install: null
- type: rpm-ostree
  repos:
  - https://pkgs.tailscale.com/stable/fedora/tailscale.repo
  - https://copr.fedorainfracloud.org/coprs/tofik/nwg-shell/repo/fedora-%OS_VERSION%/tofik-nwg-shell-fedora-%OS_VERSION%.repo
  - https://copr.fedorainfracloud.org/coprs/solopasha/hyprland/repo/fedora-%OS_VERSION%/solopasha-hyprland-fedora-%OS_VERSION%.repo
  install:
  - sddm
  - sddm-themes
  - nwg-shell
  - aylurs-gtk-shell
  - swww
  - waypaper
  - adwaita-qt5
  - gnome-themes-extra
  - gnome-icon-theme
  - paper-icon-theme
  - breeze-icon-theme
  - papirus-icon-theme
  - fuzzel
  - xorg-x11-server-Xwayland
  - polkit
  - lxpolkit
  - xdg-user-dirs
  - dbus-tools
  - dbus-daemon
  - wl-clipboard
  - gnome-keyring
  - pavucontrol
  - playerctl
  - qt5ct
  - qt5-qtwayland
  - qt6-qtwayland
  - xlsclients
  - vulkan-validation-layers
  - vulkan-tools
  - google-noto-emoji-fonts
  - gnome-disk-utility
  - fcitx5
  - wireplumber
  - pipewire
  - pamixer
  - mpd
  - ncmpcpp
  - sox
  - network-manager-applet
  - NetworkManager-openvpn
  - NetworkManager-openconnect
  - bluez
  - bluez-tools
  - blueman
  - thunar
  - thunar-archive-plugin
  - thunar-volman
  - xarchiver
  - imv
  - p7zip
  - unrar-free
  - gvfs-smb
  - dolphin
  - slurp
  - grim
  - wf-recorder
  - wlr-randr
  - wlsunset
  - grimshot
  - light
  - swaybg
  - swaylock
  - swayidle
  - kanshi
  - kitty
  - foot
  - xfce4-terminal
  - mpv
  - tailscale
  - tmux
  - screen
  - pass
  - pass-otp
  - qemu-kvm
  - libvirt-daemon
  - libvirt-daemon-config-network
  - libvirt-daemon-driver-interface
  - libvirt-daemon-driver-network
  - libvirt-daemon-driver-nwfilter
  - libvirt-daemon-driver-qemu
  - libvirt-daemon-driver-secret
  - libvirt-daemon-driver-storage-core
  - libvirt-daemon-driver-storage-disk
  - libvirt-daemon-driver-storage-scsi
  - libvirt-daemon-kvm
  - libvirt-client
  - virt-install
  - virt-manager
- type: fonts
  fonts:
    nerd-fonts:
    - Iosevka
    - FiraCode
    - Hack
    - SourceCodePro
    - Terminus
    - JetBrainsMono
    - NerdFontsSymbolsOnly
- type: script
  scripts:
  - settheming.sh
- type: default-flatpaks
  user:
    install:
    - org.gtk.Gtk3theme.adw-gtk3
    - org.gtk.Gtk3theme.adw-gtk3-dark
    - org.mozilla.firefox
- type: signing
- type: systemd
  system:
    enabled:
    - tailscaled.service
    - libvirtd.service
- type: rpm-ostree
  repos:
  - https://copr.fedorainfracloud.org/coprs/tofik/sway/repo/fedora-%OS_VERSION%/tofik-sway-fedora-%OS_VERSION%.repo
  install:
  - swaync

Yeah, you somehow got the old nightly build. There was an issue where the wrong binary got packaged into the image; I thought I had gotten rid of that image. Mind trying to install the latest version? We're on v0.8.3 now, and that particular code no longer exists because I kept running into that specific issue.
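
If you installed via cargo, something like this should get you current (adjust for however you originally installed it; bluebuild --version will confirm what you're actually running):

$ cargo install --locked blue-build
$ bluebuild --version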

Thanks @gmpinder, trying that now.

Might be something I'm doing wrong, but I'm getting a different error now on v0.8.3:

ERROR: use `docker --context=default buildx` to switch to context "default"

The bluebuild template command works just fine, though it doesn't give me any hooks for changing the docker --context (see end of comment).
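
For what it's worth, docker's own context subcommand can inspect and switch the active context from the shell (stock docker CLI, nothing bluebuild-specific; more on that below):

$ docker context ls            # list contexts; the active one is marked
$ docker context use default   # switch to the "default" context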

Full bluebuild build -v output:

$ bluebuild build -v ./config/recipe-sway-nvidia.yml
[19:04:42  INFO] => Templating for recipe at ./config/recipe-sway-nvidia.yml
[19:04:42 DEBUG] => Deserializing recipe
[19:04:42 DEBUG] => Recipe::parse_recipe(/var/home/michael/repositories/bluebuild/config/recipe-sway-nvidia.yml)
[19:04:42 DEBUG] => Recipe contents: # image will be published to ghcr.io/<user>/<name>
name: sway-nvidia
# description will be included in the image's metadata
description: sway-nvidia image based on sericiea

# the base image to build on top of (FROM) and the version tag to use
base-image: ghcr.io/ublue-os/sericea-nvidia
image-version: 39 # latest is also supported if you want new updates ASAP

# module configuration, executed in order
# you can include multiple instances of the same module
modules:
  - from-file: common-files.yml
  - from-file: common-akmods.yml
  - from-file: common-packages.yml
  - from-file: common-fonts.yml
  - from-file: common-scripts.yml
  - from-file: common-flatpaks.yml
  - type: signing
  - from-file: common-systemd.yml
  - from-file: sway-packages.yml

[19:04:42  INFO] => Retrieving OS version from ghcr.io/ublue-os/sericea-nvidia:39. This might take a bit
[19:04:42 DEBUG] => Checking if skopeo exists
[19:04:42 DEBUG] => Command skopeo does exist
[19:04:42 DEBUG] => Checking if docker exists
[19:04:42 DEBUG] => Command docker does exist
[19:04:42 DEBUG] => Checking if podman exists
[19:04:42 DEBUG] => Command podman does exist
[19:05:17 DEBUG] => Successfully inspected image docker://ghcr.io/ublue-os/sericea-nvidia:39!
[19:05:17 DEBUG] => Templating to file Containerfile
[19:05:17  INFO] => Finished templating Containerfile
[19:05:17  INFO] => Building image for recipe at ./config/recipe-sway-nvidia.yml
[19:05:17 DEBUG] => Recipe::parse_recipe(/var/home/michael/repositories/bluebuild/config/recipe-sway-nvidia.yml)
[19:05:17 DEBUG] => Recipe contents: # image will be published to ghcr.io/<user>/<name>
name: sway-nvidia
# description will be included in the image's metadata
description: sway-nvidia image based on sericiea

# the base image to build on top of (FROM) and the version tag to use
base-image: ghcr.io/ublue-os/sericea-nvidia
image-version: 39 # latest is also supported if you want new updates ASAP

# module configuration, executed in order
# you can include multiple instances of the same module
modules:
  - from-file: common-files.yml
  - from-file: common-akmods.yml
  - from-file: common-packages.yml
  - from-file: common-fonts.yml
  - from-file: common-scripts.yml
  - from-file: common-flatpaks.yml
  - type: signing
  - from-file: common-systemd.yml
  - from-file: sway-packages.yml

[19:05:17 DEBUG] => Found cached 39 for ghcr.io/ublue-os/sericea-nvidia:39
[19:05:17  WARN] => Running locally
[19:05:17 DEBUG] => Finished generating tags!
[19:05:17 DEBUG] => Tags: [
    "local-39",
]
[19:05:17  INFO] => Generating full image name
[19:05:17 DEBUG] => Using image name 'sway-nvidia'
[19:05:17 DEBUG] => Checking if docker exists
[19:05:17 DEBUG] => Command docker does exist
[19:05:17 DEBUG] => Checking if podman exists
[19:05:17 DEBUG] => Command podman does exist
[19:05:17 DEBUG] => Checking if buildah exists
[19:05:17 DEBUG] => Command buildah does exist
ERROR: use `docker --context=default buildx` to switch to context "default"
[19:05:17 ERROR] => Failed to build image

Full bluebuild template -v output:

FROM scratch as stage-config
COPY ./config /config

# Copy modules
# The default modules are inside blue-build/modules
# Custom modules overwrite defaults
FROM scratch as stage-modules
COPY --from=ghcr.io/blue-build/modules:latest /modules /modules
COPY ./modules /modules

# Bins to install
# These are basic tools that are added to all images.
# Generally used for the build process. We use a multi
# stage process so that adding the bins into the image
# can be added to the ostree commits.
FROM scratch as stage-bins

COPY --from=gcr.io/projectsigstore/cosign /ko-app/cosign /bins/cosign
COPY --from=docker.io/mikefarah/yq /usr/bin/yq /bins/yq
COPY --from=ghcr.io/blue-build/cli:latest-installer /out/bluebuild /bins/bluebuild

# Keys for pre-verified images
# Used to copy the keys into the final image
# and perform an ostree commit.
#
# Currently only holds the current image's
# public key.
FROM scratch as stage-keys
COPY cosign.pub /keys/sway-nvidia.pub
FROM scratch as stage-akmods-main
COPY --from=ghcr.io/ublue-os/akmods:main-39 /rpms /rpms

FROM ghcr.io/ublue-os/sericea-nvidia:39

LABEL org.blue-build.build-id="3c9d08f3-b6c0-41b9-a95a-3d1582a253b8"
LABEL org.opencontainers.image.title="sway-nvidia"
LABEL org.opencontainers.image.description="sway-nvidia image based on sericiea"
LABEL io.artifacthub.package.readme-url=https://raw.githubusercontent.com/blue-build/cli/main/README.md

ARG RECIPE=./config/recipe-sway-nvidia.yml
ARG IMAGE_REGISTRY=localhost

ARG CONFIG_DIRECTORY="/tmp/config"
ARG IMAGE_NAME="sway-nvidia"
ARG BASE_IMAGE="ghcr.io/ublue-os/sericea-nvidia"

# Key RUN
RUN --mount=type=bind,from=stage-keys,src=/keys,dst=/tmp/keys \
  mkdir -p /usr/etc/pki/containers/ \
  && cp /tmp/keys/* /usr/etc/pki/containers/ \
  && ostree container commit

# Bin RUN
RUN --mount=type=bind,from=stage-bins,src=/bins,dst=/tmp/bins \
  mkdir -p /usr/bin/ \
  && cp /tmp/bins/* /usr/bin/ \
  && ostree container commit

# Module RUNs
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Files module ==========" \
  && chmod +x /tmp/modules/files/files.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/files/files.sh '{"type":"files","files":[{"usr":"/usr"}]}' \
  && echo "========== End Files module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=stage-akmods-main,src=/rpms,dst=/tmp/rpms,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Akmods module ==========" \
  && chmod +x /tmp/modules/akmods/akmods.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/akmods/akmods.sh '{"type":"akmods","install":null}' \
  && echo "========== End Akmods module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Rpm-ostree module ==========" \
  && chmod +x /tmp/modules/rpm-ostree/rpm-ostree.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/rpm-ostree/rpm-ostree.sh '{"type":"rpm-ostree","repos":["https://pkgs.tailscale.com/stable/fedora/tailscale.repo","https://copr.fedorainfracloud.org/coprs/tofik/nwg-shell/repo/fedora-%OS_VERSION%/tofik-nwg-shell-fedora-%OS_VERSION%.repo","https://copr.fedorainfracloud.org/coprs/solopasha/hyprland/repo/fedora-%OS_VERSION%/solopasha-hyprland-fedora-%OS_VERSION%.repo"],"install":["sddm","sddm-themes","nwg-shell","aylurs-gtk-shell","swww","waypaper","adwaita-qt5","gnome-themes-extra","gnome-icon-theme","paper-icon-theme","breeze-icon-theme","papirus-icon-theme","fuzzel","xorg-x11-server-Xwayland","polkit","lxpolkit","xdg-user-dirs","dbus-tools","dbus-daemon","wl-clipboard","gnome-keyring","pavucontrol","playerctl","qt5ct","qt5-qtwayland","qt6-qtwayland","xlsclients","vulkan-validation-layers","vulkan-tools","google-noto-emoji-fonts","gnome-disk-utility","fcitx5","wireplumber","pipewire","pamixer","mpd","ncmpcpp","sox","network-manager-applet","NetworkManager-openvpn","NetworkManager-openconnect","bluez","bluez-tools","blueman","thunar","thunar-archive-plugin","thunar-volman","xarchiver","imv","p7zip","unrar-free","gvfs-smb","dolphin","slurp","grim","wf-recorder","wlr-randr","wlsunset","grimshot","light","swaybg","swaylock","swayidle","kanshi","kitty","foot","xfce4-terminal","mpv","tailscale","tmux","screen","pass","pass-otp","qemu-kvm","libvirt-daemon","libvirt-daemon-config-network","libvirt-daemon-driver-interface","libvirt-daemon-driver-network","libvirt-daemon-driver-nwfilter","libvirt-daemon-driver-qemu","libvirt-daemon-driver-secret","libvirt-daemon-driver-storage-core","libvirt-daemon-driver-storage-disk","libvirt-daemon-driver-storage-scsi","libvirt-daemon-kvm","libvirt-client","virt-install","virt-manager"]}' \
  && echo "========== End Rpm-ostree module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Fonts module ==========" \
  && chmod +x /tmp/modules/fonts/fonts.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/fonts/fonts.sh '{"type":"fonts","fonts":{"nerd-fonts":["Iosevka","FiraCode","Hack","SourceCodePro","Terminus","JetBrainsMono","NerdFontsSymbolsOnly"]}}' \
  && echo "========== End Fonts module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Script module ==========" \
  && chmod +x /tmp/modules/script/script.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/script/script.sh '{"type":"script","scripts":["settheming.sh"]}' \
  && echo "========== End Script module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Default-flatpaks module ==========" \
  && chmod +x /tmp/modules/default-flatpaks/default-flatpaks.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/default-flatpaks/default-flatpaks.sh '{"type":"default-flatpaks","user":{"install":["org.gtk.Gtk3theme.adw-gtk3","org.gtk.Gtk3theme.adw-gtk3-dark","org.mozilla.firefox"]}}' \
  && echo "========== End Default-flatpaks module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Signing module ==========" \
  && chmod +x /tmp/modules/signing/signing.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/signing/signing.sh '{"type":"signing"}' \
  && echo "========== End Signing module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Systemd module ==========" \
  && chmod +x /tmp/modules/systemd/systemd.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/systemd/systemd.sh '{"type":"systemd","system":{"enabled":["tailscaled.service","libvirtd.service"]}}' \
  && echo "========== End Systemd module ==========" \
  && ostree container commit
RUN \
  --mount=type=tmpfs,target=/var \
  --mount=type=bind,from=stage-config,src=/config,dst=/tmp/config,rw \
  --mount=type=bind,from=stage-modules,src=/modules,dst=/tmp/modules,rw \
  --mount=type=bind,from=ghcr.io/blue-build/cli:exports,src=/exports.sh,dst=/tmp/exports.sh \
  --mount=type=cache,dst=/var/cache/rpm-ostree,id=rpm-ostree-cache-sway-nvidia-39,sharing=locked \
  echo "========== Start Rpm-ostree module ==========" \
  && chmod +x /tmp/modules/rpm-ostree/rpm-ostree.sh \
  && source /tmp/exports.sh \
  && /tmp/modules/rpm-ostree/rpm-ostree.sh '{"type":"rpm-ostree","repos":["https://copr.fedorainfracloud.org/coprs/tofik/sway/repo/fedora-%OS_VERSION%/tofik-sway-fedora-%OS_VERSION%.repo"],"install":["swaync"]}' \
  && echo "========== End Rpm-ostree module ==========" \
  && ostree container commit

Mind using trace logging with -vv? I've not seen this error before. The trace logs should show the exact args being used for docker.

EDIT: everything before docker_driver redacted for brevity

$ bluebuild build -vv ./config/recipe-sway-nvidia.yml
...
[19:21:54 INFO  blue_build::commands::build:274] => Generating full image name
[19:21:54 TRACE blue_build::commands::build:318] => Nothing to indicate an image name with a registry
[19:21:54 DEBUG blue_build::commands::build:326] => Using image name 'sway-nvidia'
[19:21:54 TRACE blue_build::drivers:234] => Driver::get_build_driver()
[19:21:54 TRACE blue_build::drivers:309] => Driver::determine_build_driver()
[19:21:54 TRACE blue_build_utils:18] => check_command_exists(docker)
[19:21:54 DEBUG blue_build_utils:19] => Checking if docker exists
[19:21:54 TRACE blue_build_utils:21] => which docker
[19:21:54 DEBUG blue_build_utils:28] => Command docker does exist
[19:21:54 TRACE blue_build_utils:18] => check_command_exists(podman)
[19:21:54 DEBUG blue_build_utils:19] => Checking if podman exists
[19:21:54 TRACE blue_build_utils:21] => which podman
[19:21:54 DEBUG blue_build_utils:28] => Command podman does exist
[19:21:54 TRACE blue_build_utils:18] => check_command_exists(buildah)
[19:21:54 DEBUG blue_build_utils:19] => Checking if buildah exists
[19:21:54 TRACE blue_build_utils:21] => which buildah
[19:21:54 DEBUG blue_build_utils:28] => Command buildah does exist
[19:21:54 TRACE blue_build::drivers::docker_driver:133] => DockerDriver::build_tag_push(BuildTagPushOpts {
    image: Some(
        "sway-nvidia",
    ),
    archive_path: None,
    tags: [
        "local-39",
    ],
    push: false,
    no_retry_push: true,
    retry_count: 1,
    compression: Gzip,
})

[19:21:54 TRACE blue_build::drivers::docker_driver:137] => docker buildx build -f Containerfile
[19:21:54 TRACE blue_build::drivers::docker_driver:163] => -t sway-nvidia:local-39
[19:21:54 TRACE blue_build::drivers::docker_driver:175] => --builder default
[19:21:54 TRACE blue_build::drivers::docker_driver:189] => .
ERROR: use `docker --context=default buildx` to switch to context "default"
[19:21:54 ERROR blue_build::commands:27] => Failed to build image
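
For reference, stitching those trace lines together, the command bluebuild is running is effectively:

$ docker buildx build -f Containerfile -t sway-nvidia:local-39 --builder default .

Running that by hand should reproduce the same context error.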

Ah, my day has come full circle...

I ran docker context use default within the blue-build distrobox. Now, I'm getting:

$ bluebuild build -vv ./config/recipe-sway-nvidia.yml

...(everything before docker_driver redacted) ...

[19:28:42 TRACE blue_build::drivers::docker_driver:133] => DockerDriver::build_tag_push(BuildTagPushOpts {
    image: Some(
        "sway-nvidia",
    ),
    archive_path: None,
    tags: [
        "local-39",
    ],
    push: false,
    no_retry_push: true,
    retry_count: 1,
    compression: Gzip,
})
[19:28:42 TRACE blue_build::drivers::docker_driver:137] => docker buildx build -f Containerfile
[19:28:42 TRACE blue_build::drivers::docker_driver:163] => -t sway-nvidia:local-39
[19:28:42 TRACE blue_build::drivers::docker_driver:175] => --builder default
[19:28:42 TRACE blue_build::drivers::docker_driver:189] => .
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[19:28:42 ERROR blue_build::commands:27] => Failed to build image

@gmpinder One idea: is there a way to force bluebuild to use podman instead of docker? Today has left me confused as shit about how docker is supposed to work in the ublueos ecosystem; I've just been using podman/distrobox for everything.

Edit: Sorry for the ping; it's not that big of a rush. Thanks for the quick response/help.

Right now there isn't, but we do have an issue for adding that in #143.

You could also try the alpine image v0.8.3-alpine, as I don't install docker on that one.
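
Something like this should get you a fresh box with it (assuming the tag lives under ghcr.io/blue-build/cli like the other images):

$ distrobox create --image ghcr.io/blue-build/cli:v0.8.3-alpine --name bluebuild-alpine
$ distrobox enter bluebuild-alpine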

I've also rebuilt and pushed the v0.8.1 images so that other users don't run into that awful nightly build bug.

Thanks, that's a good idea. Trying the alpine image, it looks like it's working (though I needed to use sudo bluebuild build; I haven't been able to get rootless podman working).

I thought I had ruled out these problems earlier, but I think the "distrobox tips" setup steps need to be applied in the distrobox Earthfile setup you sent me earlier, since docker/podman is being used inside the distrobox.

Further reference for rootless podman:

I should have some time to look into this tomorrow.

I should have some time to look into this tomorrow.

He said, lying.

I think the fix for the userspace situation would look something like this:

diff --git a/Earthfile b/Earthfile
index 4476f48..3dba633 100644
--- a/Earthfile
+++ b/Earthfile
@@ -73,6 +73,9 @@ blue-build-cli:
 			podman \
 			skopeo
 
+	# Do podman, docker, and systemd changes in the fedora toolbox
+	# Or just podman I guess?
+
 	COPY +cosign/cosign /usr/bin/cosign
 
 	COPY (+install/bluebuild --BUILD_TARGET="x86_64-unknown-linux-gnu") /usr/bin/bluebuild
@@ -91,7 +94,24 @@ blue-build-cli-alpine:
 
 	BUILD +install --BUILD_TARGET="x86_64-unknown-linux-musl"
 
-	RUN apk update && apk add buildah podman skopeo fuse-overlayfs
+	# sample podman changes for alpine
+	# See https://distrobox.it/useful_tips/#using-podman-inside-a-distrobox
+	RUN apk update && apk add buildah podman skopeo fuse-overlayfs crun
+
+	# this doesn't actually make sense, $USER doesn't exist yet...
+	RUN usermod --add-subuids 10000-65536 $USER && usermod --add-subgids 10000-65536 $USER
+	RUN cat <<- EOF > /etc/containers/containers.conf
+		[containers]
+		netns="host"
+		userns="host"
+		ipcns="host"
+		utsns="host"
+		cgroupns="host"
+		log_driver = "k8s-file"
+		[engine]
+		cgroup_manager = "cgroupfs"
+		events_logger="file"
+		EOF
 
 	COPY +cosign/cosign /usr/bin/cosign
 	COPY (+install/bluebuild --BUILD_TARGET="x86_64-unknown-linux-musl") /usr/bin/bluebuild

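If the heredoc inside RUN turns out to be a problem (I'm not sure Earthly handles it), a printf variant would write the same containers.conf without one:

	RUN printf '%s\n' \
		'[containers]' \
		'netns="host"' \
		'userns="host"' \
		'ipcns="host"' \
		'utsns="host"' \
		'cgroupns="host"' \
		'log_driver = "k8s-file"' \
		'[engine]' \
		'cgroup_manager = "cgroupfs"' \
		'events_logger="file"' \
		> /etc/containers/containers.conf
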
While I'd love to try these changes out myself, I don't actually know what I'm doing in rust/earthly and I don't really have the time to figure it out just now -- maybe next time.

For now, I'm at least able to do rootful podman in an alpine distrobox, and that's been helpful. Thanks again for the help @gmpinder

Or, more simply, use the host's-install-of-podman/docker trick from https://distrobox.it/useful_tips/#using-hosts-podman-or-docker-inside-a-distrobox instead:

diff --git a/Earthfile b/Earthfile
index 4476f48..3dba633 100644
--- a/Earthfile
+++ b/Earthfile
@@ -91,7 +94,24 @@ blue-build-cli-alpine:
 
 	BUILD +install --BUILD_TARGET="x86_64-unknown-linux-musl"
 
 	RUN apk update && apk add buildah podman skopeo fuse-overlayfs
 
+	RUN ln -s /usr/bin/distrobox-host-exec /usr/local/bin/podman
+	# maybe /usr/bin/podman instead b/c can't write to /usr/local

 	COPY +cosign/cosign /usr/bin/cosign
 	COPY (+install/bluebuild --BUILD_TARGET="x86_64-unknown-linux-musl") /usr/bin/bluebuild
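
If that works the way the distrobox tips page describes, podman inside the box just forwards to the host's binary; a quick sanity check would be:

$ command -v podman   # should resolve to the /usr/local/bin symlink
$ podman version      # should report the host's podman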

I ran docker context use default within the blue-build distrobox. Now, I'm getting:

Just a heads up, I removed the use of --builder default in #155 since we don't want to dictate what builder you use for buildx.