Encrypted datasets are not mounted automatically
Closed this issue · 25 comments
System information
Type | Version/Name
---|---
Distribution Name | Fedora Server |
Distribution Version | 30 |
Linux Kernel | 5.0.16-300.fc30.x86_64 |
Architecture | amd64 |
ZFS Version | 0.8.0-rc5 |
SPL Version | 0.8.0-rc5 |
Describe the problem you're observing
After some testing with encrypted datasets on Fedora 30, I found that encrypted datasets are not mounted automatically after a reboot. That's expected if you use keylocation=prompt, but it also happens if you use keylocation=file://.
It looks like the /usr/lib/systemd/system/zfs-mount.service file is missing the -l option to the mount command. After I added the -l option, my encrypted datasets were mounted automatically after reboot.
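For reference, one way to try this without editing the packaged unit is a systemd drop-in (a sketch; it assumes the stock unit's ExecStart is /sbin/zfs mount -a):
# mkdir -p /etc/systemd/system/zfs-mount.service.d
# cat << 'EOF' > /etc/systemd/system/zfs-mount.service.d/load-keys.conf
[Service]
# Clear the packaged ExecStart, then re-add it with -l so that keys
# with a file:// keylocation are loaded before mounting.
ExecStart=
ExecStart=/sbin/zfs mount -l -a
EOF
# systemctl daemon-reload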
Describe how to reproduce the problem
Generate a raw key and an encrypted dataset:
# dd if=/dev/urandom of=/root/random.key bs=32 count=1
# zfs create -o encryption=on -o keylocation=file:///root/random.key -o keyformat=raw trunk/secure
# reboot
After the reboot, verify the mounted datasets:
# zfs mount
trunk/secure is not mounted. But after running:
# zfs mount -l -a
it will be.
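To see where things stop, the key state can be inspected directly (reusing trunk/secure from above; the point is that the key is never loaded at boot, not that mounting itself fails):
# zfs get -H -o value keystatus trunk/secure
unavailable
# zfs load-key trunk/secure
# zfs get -H -o value keystatus trunk/secure
available
# zfs mount trunk/secure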
Include any warning/errors/backtraces from the system logs
The Arch Linux Wiki has a section on unlocking an encrypted dataset at boot time, but on CentOS this leads to dependency cycles: https://wiki.archlinux.org/index.php/ZFS#Unlock_at_boot_time
Basically, libstoragemgmt will fail to start, which causes all kinds of havoc, including for systemd-logind.
I'm wondering whether it would be better to add the -l option to the zfs-mount.service file by default. This would only work if the dataset is encrypted with a key file.
Adding the "-l" option to zfs-mount.service does work for key files, btw. My point is that if you use a passphrase rather than a key file, the service seems to fail.
Let's see if we can get thoughts from @aerusso, @Fabian-Gruenbichler, or @rlaager on this, so systemd can prompt for the password when required.
I'm not an expert on this, but I did some looking... We need to invoke systemd-ask-password to prompt for passwords. Then, we can use zfs mount -a -l safely.
Here is an example shell script that loads passwords using systemd-ask-password.
#!/bin/sh
# Set IFS to a newline so that dataset names containing spaces
# survive the for-loop's word splitting:
IFS="
"
# Iterate over encryption roots (datasets that are their own encryptionroot):
for dataset in $(zfs list -H -p -o name,encryptionroot | \
    awk -F "\t" '{if ($1 == $2) { print $1 }}')
do
    if [ "$(zfs get -H -p -o value keylocation "$dataset")" = "prompt" ] &&
       [ "$(zfs get -H -p -o value keystatus "$dataset")" = "unavailable" ]
    then
        systemd-ask-password --id="zfs:$dataset" \
            "Enter passphrase for '$dataset':" | \
            zfs load-key "$dataset"
    fi
done
I have tested this with an encryption root with spaces in it, and it works.
I'm not sure if sh is the best approach for this, or if we should have some small helper (or an alternate mode for zfs load-key) that does all this.
Edit: This assumes the password is entered correctly. A production-quality implementation really needs to prompt multiple (e.g. 3) times.
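A minimal sketch of what that retry logic could look like (load_key_with_retries is a hypothetical helper, not part of the script above):
#!/bin/sh
# Hypothetical helper: prompt for a passphrase up to 3 times before giving up.
load_key_with_retries() {
    dataset="$1"
    for attempt in 1 2 3; do
        if systemd-ask-password --id="zfs:$dataset" \
            "Enter passphrase for '$dataset' (attempt $attempt of 3):" | \
            zfs load-key "$dataset"
        then
            return 0
        fi
    done
    return 1
}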
Also, this further assumes that you even have the passwords at all. That may not be a valid assumption. Imagine that everyone's home directory is separately encrypted with their login password. It's not going to be possible to mount those at boot. (One might argue that they should be canmount=noauto in that case?) Another example might be datasets that were raw-sent to a backup server.
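For the canmount=noauto idea, that would just be (rpool/home/alice is a made-up per-user dataset):
# zfs set canmount=noauto rpool/home/alice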
Perhaps the best approach would be to have a zfs mount flag that, instead of or in addition to -l, calls systemd-ask-password to do the prompting for keylocation=prompt datasets.
Edit 2: There is a separate question of how this all needs to work with zfs-mount-generator.
Those dependency cycles are because the Arch instructions don't disable default dependencies and zfs-mount.service does.
It seems to work fine for me if I disable default dependencies. I don't understand what the %j/%i stuff is trying to do in the Arch instructions, or why they call /usr/bin/bash instead of running /sbin/zfs directly. If anyone knows the intent there, it'd be nice to understand.
As root, this should get you a unit file that works fairly well with keys that don't require a prompt:
cat << 'EOF' > /etc/systemd/system/zfs-load-key@.service
[Unit]
Description=Load ZFS keys
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target
Requires=zfs-import.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs load-key %I
[Install]
WantedBy=zfs-mount.service
EOF
The capital %I is an un-escaped version of %i. It converts - to /, so you can use one unit file for all your paths. For the dataset tank/enc, you can use:
systemctl enable zfs-load-key@tank-enc
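One caveat (tank/my-data is a made-up dataset name): since %I turns - back into /, dataset components that themselves contain a dash have to be escaped with systemd-escape:
# systemd-escape "tank/my-data"
tank-my\x2ddata
# systemctl enable 'zfs-load-key@tank-my\x2ddata'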
So far that's working for me with a file-based passphrase:
tank/enc  keylocation  file:///root/.zfs-passphrase  local
tank/enc  keyformat    passphrase                    -
tank/enc  keystatus    available                     -
IMHO the following approach would be a good idea:
- add a new helper, or a new switch to zfs load-key, that will prompt via systemd-ask-password instead of directly
- add a new template unit zfs-load-key@.service, with instances for each encryptionroot, using the new helper/zfs load-key to load the key (a sketch follows after this list)
- add a new zfs-load-key-generator, generating the instances of zfs-load-key@.service
- adapt zfs-mount-generator to add appropriate dependencies on zfs-load-key@XXX.service for each generated .mount unit
- adapt zfs-mount.service to use the new helper/zfs load-key with -a in ExecStartPre, or alternatively
- add another target into the mix that is reached after all generated zfs-load-key@.service instances have been started, and order the currently existing zfs-mount.service after that target
Optionally, we might want to add ordering and/or cycle detection for keys stored on encrypted datasets which are needed by other encrypted datasets, although cycles are not that much of a problem since systemd will break them one way or another anyway, and if they exist the admin needs to fix them and not us ;)
I did not yet have time to play around with encryption, but I hope to get to it soon. If desired, I can attempt to at least whip up a PoC for the above in the next 1-2 weeks?
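To make the first two bullets above concrete, here is a minimal sketch of such a template instance, combining @ryanjaeb's unit with the systemd-ask-password pipe from earlier (the conditional ExecStart is illustrative glue, not an implemented helper):
cat << 'EOF' > /etc/systemd/system/zfs-load-key@.service
[Unit]
Description=Load ZFS key for %I
DefaultDependencies=no
Before=zfs-mount.service
After=zfs-import.target
Requires=zfs-import.target
[Service]
Type=oneshot
RemainAfterExit=yes
# Prompt via systemd-ask-password only for keylocation=prompt;
# otherwise let zfs load-key read the key from its keylocation.
# "$$" passes a literal "$" through systemd to the shell.
ExecStart=/bin/sh -c 'if [ "$$(zfs get -H -o value keylocation "%I")" = "prompt" ]; then systemd-ask-password --id="zfs:%I" "Enter passphrase for %I:" | zfs load-key "%I"; else zfs load-key "%I"; fi'
[Install]
WantedBy=zfs-mount.service
EOF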
That seems like a well-thought-out, comprehensive solution.
We could significantly reduce the complexity of this process by deprecating zfs-mount.service, using zfs-mount-generator by default, and using a modified version of @ryanjaeb's template. We could include @rlaager's script as a fallback dependency of zfs-mount.service.
I'm also assuming that we're doing this with a generator because we're concerned about calling zfs list at boot time. If that's not the reason, then I don't understand the benefit. Also, I question whether a parallelized call to zfs list during boot is that bad (cf. the call to zfs list at very early boot required for zfs-mount-generator).
If we can guarantee that zfs-load-key-generator will function reliably, we should be able to do the same with zfs-mount-generator (i.e., require that zed is running).
IIRC, zfs-mount-generator caches the information not for performance reasons, but because generators need to run really early and the pools aren't necessarily imported at that time. It was also argued that zfs list could be dangerously slow. In this case, though, we don't need information about encryption until much later in the boot process.
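For reference, the cache in question is the per-pool file under /etc/zfs/zfs-list.cache/, kept up to date by a ZED script; a typical setup per the zfs-mount-generator man page looks like this (the zedlet path varies by distribution, and tank is a placeholder pool name):
# mkdir -p /etc/zfs/zfs-list.cache
# touch /etc/zfs/zfs-list.cache/tank
# ln -s /usr/lib/zfs-linux/zed.d/history_event-zfs-list-cacher.sh /etc/zfs/zed.d/
# systemctl restart zfs-zed.service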
If we want to support encrypted root (/) datasets, we'd need to somehow get this information into the initrd, though (for Debian-based distros, that means adapting the update-initramfs hook similar to how cryptsetup/LUKS is handled; I'm not sure about dracut).
If the root filesystem dataset is encrypted, the zfs script in the initramfs prompts for the key. That already works today for initramfs-tools. (I assume it does for dracut too, but I don't use that.)
What I'm imagining (and have partially written up) is:
- Drop zfs-mount.service.
- .mount units are produced at boot time by zfs-mount-generator and, if encroot is set, depend on zfs-import-key-$encroot.service.
- zfs-import-key-$encroot.service units are also produced by zfs-mount-generator, and have a RequiresMountsFor dependency on the (appropriately stripped) keylocation value (if it is a file).
- zfs-import-key-*.service calls zfs load-key, or systemd-ask-password | zfs load-key, depending on the value of keylocation (see the sketch after this list).
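For illustration, the dependency wiring on a generated .mount unit might look like this (a sketch following the naming in the list above, for tank/enc mounted at /enc; not actual generator output):
# enc.mount, as zfs-mount-generator would emit it under this scheme:
[Unit]
Requires=zfs-import-key-tank-enc.service
After=zfs-import-key-tank-enc.service
[Mount]
Where=/enc
What=tank/enc
Type=zfs
Options=zfsutil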
So @Fabian-Gruenbichler, I think I do understand your comment about zfs-load-key-generator, but I think it would be better to wrap it all up into the same script.
I don't know if this is exactly the same issue, but when I manually run:
zfs load-key pool/dataset
I expect all the mountable datasets contained within to be mounted, consistent with the behaviour of zpool import.
The fact that it mounts nothing at all is a bit surprising and frustrating from a user's perspective.
I think both of those commands should have consistent mounting behaviour; if someone needs to not mount anything, maybe add a switch to stop that from happening.
> I don't know if this is exactly the same issue, but when I manually run:
> zfs load-key pool/dataset
> I expect all the mountable datasets contained within to be mounted, consistent with the behaviour of zpool import.
The current behavior of zpool import is to not attempt to load any keys or mount any encrypted datasets. For this functionality, we have zpool import -l. We did this so that people with scripts that run zpool import won't break when an encrypted dataset prompts for a key. Similarly, we have zfs mount -l, which works in much the same way.
We are still looking into the use-cases here and may revisit this decision in the future, after the feature gets a bit more use.
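In other words (tank is a placeholder pool name):
# zpool import tank       # keys are not loaded; encrypted datasets stay unmounted
# zpool import -l tank    # additionally loads keys, prompting where needed
# zfs mount -l -a         # the same opt-in behavior at the mount level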
I mean that zfs load-key should mount all newly available datasets, as if they had been available from the moment of pool import by the zpool import command.
I don't mean that it should do nothing, because zpool import without a key would do nothing on those datasets.
I will try zfs mount -l as a substitute for zfs load-key. However, I still think the default behaviour of loading a key but not then mounting a mountable dataset is inconsistent with the rest of ZFS on Linux.
I assume it's possible to update the mounting options for a dataset before the key is loaded, so the user has already specified what they wanted prior to loading the key.
- LUKS-encrypted /boot (which is bootable by GRUB with a single prompt for just this device)
- lots of keyfiles for different volumes/datasets/partitions
- marking those needed in the initrd for inclusion in /etc/crypttab
- automatic decryption in the initrd to avoid entering lots of passphrases on each boot
Is there any guide on how to make the above system work? I'm trying to do something similar, but apparently the initramfs is trying to mount my rpool before /boot, and there is surely no key for rpool yet...
Any advice?
Maybe @Fabian-Gruenbichler
@alexsmartens it used to work, but it seems it recently broke - I haven't had time to investigate yet, but looking at changes to the ZFS initramfs scripts/hooks should probably shed some light.
Thanks @Fabian-Gruenbichler, changing /usr/share/initramfs-tools/scripts/zfs solved my problem (#10360)
I was able to use LUKS+ext4 encryption that can be decrypted over SSH before all systemd services start, using this excellent tutorial: https://blog.iwakd.de/headless-luks-decryption-via-ssh
I want to do the same with ZFS native encryption (with passphrase).
Note that my root partition is not encrypted (nor uses ZFS).
My encryption is in another partition.
For LUKS, the key was systemd-cryptsetup@name.service together with the /etc/fstab and /etc/crypttab machinery.
For ZFS, as I understand it, none of these are used.
So I could put a dependency for my decrypt service on the mount point, but starting the service does not make it ask for a password the way systemd-cryptsetup does, so I have to run zfs mount -l first. I want to avoid that and just start the decrypt.target.
Is this possible?
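For what it's worth, one way to sketch this (zfs-decrypt-mount.service is a made-up unit name; decrypt.target and pool/dataset come from the setup described above) is a oneshot unit that reuses the systemd-ask-password pipe from earlier in this thread:
cat << 'EOF' > /etc/systemd/system/zfs-decrypt-mount.service
[Unit]
Description=Load ZFS key via password agent, then mount
[Service]
Type=oneshot
RemainAfterExit=yes
# systemd-ask-password talks to whatever password agent is available,
# including one reachable over the SSH session described above.
ExecStart=/bin/sh -c 'systemd-ask-password "ZFS passphrase for pool/dataset:" | zfs load-key pool/dataset'
ExecStart=/sbin/zfs mount pool/dataset
[Install]
WantedBy=decrypt.target
EOF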
> I was able to use LUKS+ext4 encryption that can be decrypted over SSH before all systemd services start, using this excellent tutorial: https://blog.iwakd.de/headless-luks-decryption-via-ssh
I've achieved the same by reading https://manpages.ubuntu.com/manpages/focal/man8/zfs-mount-generator.8.html
Why is this not mounted automatically? All the needed options are specified.
I don't use the zed cache.
# zfs get all backup/enc backup | pcre2grep 'key|mount'
backup      mounted      no                                          -
backup      mountpoint   /backup                                     default
backup      canmount     off                                         local
backup      keylocation  file:///root/important/zfs-backup.key.bin   local
backup      keyformat    raw                                         -
backup      keystatus    available                                   -
backup/enc  mounted      yes                                         -
backup/enc  mountpoint   /backup                                     local
backup/enc  canmount     on                                          default
backup/enc  keylocation  none                                        default
backup/enc  keyformat    raw                                         -
backup/enc  keystatus    available                                   -
Dec 24 10:03:30 zed[7286]: eid=5 class=config_sync pool='tank'
Dec 24 10:03:30 zed[7289]: eid=10 class=config_sync pool='backup'
Dec 24 10:03:30 zed[7294]: eid=8 class=pool_import pool='backup'
Dec 24 10:03:30 zed[7284]: eid=7 class=config_sync pool='backup'
Dec 24 10:03:30 systemd[1]: Starting Import ZFS pools by device scanning...
Dec 24 10:03:30 zpool[7101]: no pools available to import
Dec 24 10:03:30 systemd[1]: Finished Import ZFS pools by device scanning.
Dec 24 10:03:30 systemd[1]: Reached target ZFS pool import target.
tank is the root pool, encrypted with LUKS2 and unlocked at boot with a passphrase (F35, dracut, and ZFS 2.1.2).
So by 10:03:30 the key was available.