qemu,raw: Ubuntu 20.04 EFI builds are broken
johananl opened this issue · 22 comments
What steps did you take and what happened:
PACKER_LOG=1 make build-qemu-ubuntu-2004-efi
The build fails with the following error:
2022/05/11 13:21:06 packer-builder-qemu plugin: Qemu stderr: qemu-system-x86_64: -drive file=OVMF.fd,if=pflash,format=raw,readonly=on: Could not open 'OVMF.fd': No such file or directory
==> qemu: Error launching VM: Qemu failed to start. Please run with PACKER_LOG=1 to get more info.
The same is theoretically true for make build-raw-ubuntu-2004-efi; however, raw builds are currently broken due to #879.
Update: The issue reproduces for raw builds, too.
What did you expect to happen:
I expected the build to work.
Anything else you would like to add:
Looks like OVMF.fd is a firmware file (OVMF, an open-source UEFI implementation) used to emulate EFI in VMs. Ideally we should create (or download) this file automatically during the build. If we can't do that, we should document that it's required and probably exempt the EFI builds from make build-{qemu|raw}-all.
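As a workaround sketch (untested assumptions: the template looks for OVMF.fd in the current working directory, and the Ubuntu ovmf package ships /usr/share/OVMF/OVMF.fd; the path can vary by distro and version), something like this should unblock a local build:

```
# Install the OVMF UEFI firmware for QEMU (Ubuntu/Debian package name: ovmf)
sudo apt-get update && sudo apt-get install -y ovmf

# Make the firmware available at the relative path used by the qemu -drive argument
cp /usr/share/OVMF/OVMF.fd ./OVMF.fd

PACKER_LOG=1 make build-qemu-ubuntu-2004-efi
```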
Environment:
Project (Image Builder for Cluster API, kube-deploy/imagebuilder, konfigadm): image-builder
Additional info for Image Builder for Cluster API related issues:
- OS (e.g. from /etc/os-release, or cmd /c ver): Ubuntu 20.04 LTS
- Packer Version: v1.8.0
- Packer Provider: qemu
- Ansible Version: irrelevant
- Cluster-api version (if using): irrelevant
- Kubernetes version (use kubectl version): irrelevant
/kind bug
cc @MaxRink
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
From my (limited) understanding, this requires some QEMU (qemu-kvm) tooling to be installed on the machine you're running this on.
We managed to get around this at Giant Swarm by using the Docker image and running the following inside it:
apt-get update && apt-get install -y qemu qemu-kvm
Maybe it makes sense to have this built into the standard image-builder container image, to at least make sure it's usable out of the box with QEMU.
I'm not sure how best to handle this when running make directly, other than stating it as a prerequisite. Do you have any ideas?
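One rough idea for the make-direct case would be a pre-flight check before launching Packer, e.g. something like this hypothetical snippet (not something the Makefile does today):

```
# Fail early with a clear message if the QEMU binary the builder needs is missing
command -v qemu-system-x86_64 >/dev/null 2>&1 || {
  echo "qemu-system-x86_64 not found; install qemu / qemu-kvm first" >&2
  exit 1
}
```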
Actually, looks like this is now added to the Docker image: 6b2a0ec
The OVMF.fd error still reproduces for me, @mboersma. IIUC #879 didn't affect this issue itself, but rather the ability to reproduce it for raw builds (you need to be able to run a raw build in order to determine whether the EFI raw build is broken).
Thanks for the info @AverageMarcus. AFAICT you're talking about the QEMU/KVM dependency, which is of course necessary; however, I'm not sure how satisfying that dependency fixes the problem with the missing OVMF.fd file.
This seems very relevant: https://github.com/tianocore/tianocore.github.io/wiki/How-to-run-OVMF
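For context, that wiki describes the split-firmware layout, where the read-only firmware code and a writable variable store are passed to QEMU as two pflash drives; the single OVMF.fd referenced by the error message is the older combined image. Roughly (illustrative only; the paths below are the Ubuntu ovmf package defaults and may differ elsewhere):

```
# Give the VM its own writable copy of the variable store; the code image stays read-only
cp /usr/share/OVMF/OVMF_VARS.fd ./OVMF_VARS.fd

qemu-system-x86_64 -m 2048 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=./OVMF_VARS.fd
```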
I've reproduced the problem using the Docker-based build, too:
docker run -it --rm --net=host -e PACKER_LOG=1 registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.17 build-qemu-ubuntu-2004-efi
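A variant that might unblock the containerized repro is bind-mounting the host firmware into the container; note that the in-container destination below is an assumption and would need to match wherever the template looks for OVMF.fd:

```
# Hypothetical: expose the host's OVMF image at the relative path the template expects.
# /home/imagebuilder as the working directory inside the container is an assumption.
docker run -it --rm --net=host -e PACKER_LOG=1 \
  -v /usr/share/OVMF/OVMF.fd:/home/imagebuilder/OVMF.fd:ro \
  registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.17 \
  build-qemu-ubuntu-2004-efi
```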
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale