With disk encryption added in preseed-efi.cfg, build stuck at waiting for SSH
ygao-armada opened this issue · 4 comments
What steps did you take and what happened:
Add the following full-disk-encryption-related change to preseed-efi.cfg:
diff --git a/images/capi/packer/raw/linux/ubuntu/http/base/preseed-efi.cfg b/images/capi/packer/raw/linux/ubuntu/http/base/preseed-efi.cfg
index 14cb4008f..fca87df75 100644
--- a/images/capi/packer/raw/linux/ubuntu/http/base/preseed-efi.cfg
+++ b/images/capi/packer/raw/linux/ubuntu/http/base/preseed-efi.cfg
@@ -52,7 +52,12 @@ d-i partman-partitioning/default_label string gpt
d-i partman/choose_label string gpt
d-i partman/default_label string gpt
-d-i partman-auto/method string regular
+#d-i partman-auto/method string regular
+d-i partman-auto/method string crypto
+d-i partman-crypto/confirm boolean true
+d-i partman-crypto/method string luks
+d-i partman-crypto/passphrase password possible
+d-i partman-crypto/passphrase-again password possible
d-i partman-auto/choose_recipe select gpt-boot-root-swap
d-i partman-auto/expert_recipe string \
gpt-boot-root-swap :: \
@@ -78,6 +83,8 @@ d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
+d-i initramfs-tools/cryptroot-initramfs-tools/verbose boolean true
+
# Create the default user.
d-i passwd/user-fullname string builder
d-i passwd/username string builder
@@ -93,6 +100,9 @@ d-i grub-installer/with_other_os boolean true
d-i finish-install/reboot_in_progress note
d-i pkgsel/update-policy select none
+d-i debian-installer/add-kernel-opts string \
+ "cryptopts=target=root,source=/dev/sda3,luks"
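A possible gap in the diff above: partman-crypto can still stop at interactive questions that are not preseeded here (for example the weak-passphrase warning, or the disk-erase step that precedes encryption), which would leave the installer waiting for console input that never comes. A hedged sketch of additional debconf answers that unattended crypto installs commonly need; the template names come from partman-crypto/partman-auto-crypto and have not been verified against this exact Ubuntu release:

```
# Sketch only: suppress prompts partman-crypto may still raise interactively.
# Template names are assumptions; confirm with debconf-get-selections on a
# manually installed system before relying on them.
d-i partman-crypto/weak_passphrase boolean true
d-i partman-auto-crypto/erase_disks boolean false
```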
Run the command "make build-raw-ubuntu-2004-efi".
The build gets stuck waiting for SSH:
==> qemu: Connecting to VM via VNC (127.0.0.1:5975)
==> qemu: Typing the boot commands over VNC...
qemu: Not using a NetBridge -- skipping StepWaitGuestAddress
==> qemu: Using SSH communicator to connect: 127.0.0.1
==> qemu: Waiting for SSH to become available...
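While the build hangs like this, one way to see which installer screen it is sitting on (often an unanswered partman-crypto prompt, or the LUKS passphrase prompt on first boot of the installed system) is to attach a viewer to the VNC endpoint Packer prints. A small sketch that extracts the endpoint from the log line above; the log text is taken verbatim from this output:

```shell
# Pull the host:port out of Packer's "Connecting to VM via VNC (...)" line
# so a viewer (e.g. vncviewer) can be pointed at the stuck installer.
LOG_LINE='==> qemu: Connecting to VM via VNC (127.0.0.1:5975)'
VNC_ADDR=$(printf '%s\n' "$LOG_LINE" | sed -n 's/.*(\(.*\)).*/\1/p')
echo "$VNC_ADDR"   # → 127.0.0.1:5975; then: vncviewer "$VNC_ADDR"
```

If the console shows a passphrase prompt, that would explain the SSH timeout: Packer only communicates over SSH, and an encrypted root that requires an interactive passphrase never reaches the point where sshd is running.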
What did you expect to happen:
The build succeeds and produces the output image.
Anything else you would like to add:
Environment:
Project: Image Builder for Cluster API
Additional info for Image Builder for Cluster API related issues:
- OS (e.g. from /etc/os-release, or cmd /c ver):
- Packer Version:
- Packer Provider:
- Ansible Version:
- Cluster-api version (if using):
- Kubernetes version (use kubectl version):
/kind bug
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.