However, there is a strange issue: the HDDs are not leaving the idle_a power state anymore, neither automatically nor when using openSeaChest --device /dev/sda --transitionPower standby.
This problem does not appear when I comment out these lines, which leads me to believe that there is a conflict between mounting the pool using fileSystems and boot.zfs.extraPools. (Although I couldn’t find anything on that topic online… am I missing something?)
Is it possible to remotely unlock, over SSH, a ZFS pool that is not using legacy mountpoints? Without the fileSystems block, the initrd just finishes without blocking, and there is no SSH server running when the system asks for the password.
Additionally, I tried replacing this initrd systemd service with the following:
systemd.services.zfs-remote-unlock = {
  description = "Prepare for ZFS remote unlock";
  wantedBy = [ "initrd.target" ];
  after = [ "systemd-networkd.service" ];
  serviceConfig.Type = "oneshot";
  path = with pkgs; [ zfs ];
  script = ''
    zfs load-key -a
  '';
};
but that produces the error Jan 21 21:03:30 seidenschwanz zfs-remote-unlock-start[235]: /nix/store/zj5pydswy860ydzv29imhqf0mzcpxb91-unit-script-zfs-remote-unlock-start/bin/zfs-remote-unlock-start: line 4: zfs: command not found, which confuses me even more.
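Edit: my best guess is that systemd.services only defines units for the stage 2 system, so the generated script references a store path that doesn't exist inside the initramfs. If that's right, the unit would presumably have to be declared under boot.initrd.systemd.services instead, with the zfs binary made available in the initrd. An untested sketch:

boot.initrd.systemd.services.zfs-remote-unlock = {
  description = "Prepare for ZFS remote unlock";
  wantedBy = [ "initrd.target" ];
  after = [ "systemd-networkd.service" ];
  serviceConfig.Type = "oneshot";
  script = ''
    zfs load-key -a
  '';
};

# Assumption: the zfs binary is not already part of this initrd and
# has to be copied in explicitly.
boot.initrd.systemd.extraBin.zfs = "${pkgs.zfs}/bin/zfs";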
For the idle powerdown, it’s unlikely to be related to encryption. Is this just the first/only dataset mounted?
ZFS takes some work to get truly idle. In my case I have snapshots and other things going on all the time, so it's a lost cause. But without anything like that, you will likely at least need to play with the atime and relatime properties.
Not so much a conflict, but a redundancy. If it’s mentioned in fileSystems then the pool dependency will be added to the imports automatically. If not, then extraPools is intended to append, well, extra pools to the import list, such as where you’re using zfs properties to mount datasets on import (or you’re mounting no datasets, because it’s a pure backup target for zfs recv).
What might be a conflict is having a fileSystems entry as well as zfs mountpoint properties (other than legacy/noauto). It’s possible there are two mechanisms both trying to mount in the same place, and repeated retries by the one that lost the race are keeping the disks active?
I had used the zfs pool for about a year without encryption and never had problems with the HDDs spinning down. But after setting up encryption and remote unlocking, the HDDs don't power down at all (and they produce a single clicking sound of a head movement every ~10 seconds), despite zero IO according to iostat and zpool iostat.
atime and relatime are both disabled for the whole zpool.
That’s what I suspect. And moving to legacy mountpoints might solve my problem. I’ll try it tomorrow. But I find it strange that decrypting the zpool using SSH just doesn’t work when using only extraPools.
Using extraPools, the pool will not be imported during stage 1, only much later during stage 2 boot. Anything you need for boot should be an explicit mount; general bulk data can and will mount later.
Unless the root dataset of that pool is the one with encryption, the prompt will not come at import either, but (just a little) later again as the datasets mount, based on their properties.
Depending on details, these mounts may well come after the regular ssh service starts, so you can probably just use that. Or you might need to add some systemd ordering clues, but otherwise nothing special is needed.
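For the ordering, a sketch along these lines might do; the service and pool names are made up, and it relies on extraPools generating one zfs-import-<pool>.service per pool:

# Make an existing service wait until the extra pool has been
# imported and its datasets mounted ("tank" and "my-service" are
# placeholders).
systemd.services.my-service = {
  after = [ "zfs-import-tank.service" "zfs-mount.service" ];
  requires = [ "zfs-import-tank.service" ];
};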
(Sorry, I didn’t read your message correctly. This reply is redundant.)
I would only need the pool to be imported during stage 1 because then an SSH server is running. Is it possible to have the regular SSH server running before the pool is imported during stage 2?
You could also just set canmount=noauto on the dataset, or leave the pool out of extraPools entirely (assuming there's nothing else unencrypted there).
Then nothing will happen until you ssh in and zpool import -l or zfs mount -l explicitly. Then any other service that you want to start once that’s mounted can depend on that, or have a ConditionPathExists for something inside the mounted dataset.
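A sketch of the ConditionPathExists variant; the service name, dataset, and marker path are all made up:

systemd.services.media-server = {
  description = "Example service that needs the unlocked dataset";
  wantedBy = [ "multi-user.target" ];
  # Skipped at boot while the dataset is still locked; start it
  # manually (systemctl start media-server) after zpool import -l tank.
  unitConfig.ConditionPathExists = "/tank/media/.unlocked";
  serviceConfig.Type = "oneshot";
  script = "echo /tank/media is available";
};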
Or you hold at stage 1 for the prompt, if the machine shouldn’t boot without that data. That’s what I use for root and all other data.
I tried to do that. But even after setting canmount=noauto for all encrypted datasets (zpool/encrypted and its children), zfs-import-zpool.service still asked for a password.
That is what I unsuccessfully tried in my first post. Could you share how you do that (if you don’t use mountpoint=none and fileSystems)?
Anyway, I have given up on letting ZFS mount the datasets and switched to mountpoint=none. That works and, as it is not inferior in any way, I should have just done that in the first place.
If you’re using the declarative fileSystems list, those datasets should have mountpoint=legacy (or I guess none).
As for what I do: rpool has encryption enabled from the root dataset, so that gets prompted for at import in stage 1. Anything in the fileSystems list is a child of that, so it inherits the keys and there's no additional prompt, but that list is short and contains just what's needed for boot. A sketch, with made-up dataset names:
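# All children of the encrypted rpool root, all mountpoint=legacy;
# the real dataset names differ.
fileSystems."/" = { device = "rpool/root"; fsType = "zfs"; };
fileSystems."/nix" = { device = "rpool/nix"; fsType = "zfs"; };
fileSystems."/var" = { device = "rpool/var"; fsType = "zfs"; };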
I don’t have any datasets from secondary pools in these lists, but it would work the same for those if I did. I used to, when I used a separate bpool for grub-compatible zfs booting - oh, maybe I still have one machine like that actually.
Everything else (dozens or even hundreds of datasets) is mounted via zfs properties, later. Some of those are from secondary pools, listed in extraPools, and some of those vary on whether the pool root, or a lower dataset, is the encryption root, some also have luks that needs to be unlocked first.
Then I also have a dataset tree that's not mounted at all, because it holds raw sends received as backups, and those also get mountpoint=legacy. Some pools have only this, so no prompts are needed at all, and I list just the pools I need prompted, e.g. (pool names here are placeholders):
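boot.zfs.extraPools = [ "tank" "backups" ];
# Only prompt for pools that actually need a key at boot; "backups"
# holds only raw received datasets, so it isn't listed here.
boot.zfs.requestEncryptionCredentials = [ "rpool" "tank" ];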
Thanks, I changed my setup to be similar to yours with mountpoint=legacy and with a similar use of fileSystems. I do wonder though if zfs.requestEncryptionCredentials could maybe help with my original issue… But that doesn’t matter. It works and I’m happy.