Panic: Timed out waiting for partition
Closed this issue · 4 comments
zehkira commented
```
Setup [1/20]: label
Setup [2/20]: mkpart
Discarding device blocks: done
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: bbeacfd4-ce77-46e1-842d-b65dcc32a605
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

Setup [3/20]: mkpart
mkfs.fat 4.2 (2021-01-31)
Setup [4/20]: mkpart
Setup [5/20]: mkpart
Setup [6/20]: pvcreate
Setup [7/20]: pvcreate
Setup [8/20]: vgcreate
Setup [9/20]: vgcreate
Setup [10/20]: lvcreate
Setup [11/20]: lvm-format
Discarding device blocks: done
Creating filesystem with 131072 4k blocks and 32768 inodes
Filesystem UUID: 5e1574f5-a718-422a-ad97-b0726e8d60c7
Superblock backups stored on blocks:
	32768, 98304

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Setup [12/20]: lvcreate
Setup [13/20]: lvcreate
Setup [14/20]: make-thin-pool
Setup [15/20]: lvcreate-thin
Setup [16/20]: lvcreate-thin
Setup [17/20]: lvm-format
btrfs-progs v6.6.3
See https://btrfs.readthedocs.io for more information.

Performing full device TRIM /dev/vos-root/root-a (19.00GiB) ...
NOTE: several default settings have changed in version 5.15, please make sure
      this does not affect your deployments:
      - DUP for metadata (-m dup)
      - enabled no-holes (-O no-holes)
      - enabled free-space-tree (-R free-space-tree)

Label:              (null)
UUID:               e01b4387-8aca-49c8-ac09-914bfeae0b87
Node size:          16384
Sector size:        4096
Filesystem size:    19.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP             256.00MiB
  System:           DUP               8.00MiB
SSD detected:       no
Zoned device:       no
Incompat features:  extref, skinny-metadata, no-holes, free-space-tree
Runtime features:   free-space-tree
Checksum:           crc32c
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1    19.00GiB  /dev/vos-root/root-a

Setup [18/20]: lvm-format
btrfs-progs v6.6.3
See https://btrfs.readthedocs.io for more information.

Performing full device TRIM /dev/vos-root/root-b (19.00GiB) ...
NOTE: several default settings have changed in version 5.15, please make sure
      this does not affect your deployments:
      - DUP for metadata (-m dup)
      - enabled no-holes (-O no-holes)
      - enabled free-space-tree (-R free-space-tree)

Label:              (null)
UUID:               0e5787c9-7a2e-4bdc-8146-3170d98d541c
Node size:          16384
Sector size:        4096
Filesystem size:    19.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP             256.00MiB
  System:           DUP               8.00MiB
SSD detected:       no
Zoned device:       no
Incompat features:  extref, skinny-metadata, no-holes, free-space-tree
Runtime features:   free-space-tree
Checksum:           crc32c
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1    19.00GiB  /dev/vos-root/root-b

Setup [19/20]: lvcreate
Setup [20/20]: lvm-luks-format
```
```
panic: Timed out waiting for partition /dev/vos-var/var

goroutine 1 [running]:
github.com/vanilla-os/albius/core/disk.(*Partition).WaitUntilAvailable(0xc0002200e0)
	github.com/vanilla-os/albius/core/disk/partition.go:343 +0xfc
github.com/vanilla-os/albius/core.runSetupOperation({0xc00041ac28?, 0xc000058038?}, {0xc00041ac40, 0xf}, {0xc00042d180, 0x4, 0x3?})
	github.com/vanilla-os/albius/core/recipe.go:598 +0x1b48
github.com/vanilla-os/albius/core.(*Recipe).RunSetup(0xc0000ab5e0)
	github.com/vanilla-os/albius/core/recipe.go:629 +0x195
main.main()
	github.com/vanilla-os/albius/albius.go:20 +0x96
```
This happened while trying to install in a 30 GB VM with encryption enabled and automatic partitioning.

Build: VanillaOS-2-testing.20240208
zehkira commented
I tried to install again, this time without encryption and got the same error as Vanilla-OS/vanilla-installer#370.
zehkira commented
Just realized that what I have is not the latest build. It seems I was misled by the website, which lists 117 as the latest despite the existence of 118.
It appears that this issue might have been fixed already in 90a35bb.
mirkobrombin commented
I just updated the website.
taukakao commented
Closing this. If it happens on the newest build, please reopen it.