Flawless.
No
Yes, I have never had any ZFS-related issues.
Yes
A little over 4 years.
I think that a ZFS root install using flakes is more complex than it needs to be, or more long-winded depending on how you approach it. Also, the ZFS root documentation used to include a complicated example config (which is no longer there). For convenience, I’d really like the GUI installer to be able to handle ZFS, for wider adoption and another killer feature of NixOS.
But to confirm, the process is not overly difficult.
Maybe slightly offtopic, but what are the benefits of ZFS vs btrfs on NixOS?
Seems like btrfs would eventually eat a little bit less RAM and might be a bit simpler?
There are probably more than this but off the top of my head:
This is definitely true. Although, the amount of RAM ZFS uses is also tunable.
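For example, a minimal sketch of capping the ARC on a NixOS system; the 4 GiB value is just an assumed example, tune it to your workload:

boot.kernelParams = [
  # zfs_arc_max is in bytes; 4294967296 bytes = 4 GiB
  "zfs.zfs_arc_max=4294967296"
];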
Btrfs is much simpler and can be used as any other filesystem without much knowledge. ZFS requires you to learn about it and configure it for optimal performance with your specific use case.
Ultimately, it depends on what features and functionality you care about. ZFS and btrfs both have their use cases.
You sure that this is still the case? I know btrfs was not very reliable.
But I think they fixed that years ago. I have had zero issues in 2 years.
Thanks for all your insight
There’s a few things to be aware of, and most of this is unrelated to NixOS and applies to any distro with ZFS.

Native encryption is iffy. It has a laundry list of known bugs that have been unfixed for years now. Many of them involve zfs send --raw, and they usually just crash the system rather than causing data loss. But there’s still a small chance that native encryption will just kill a dataset, or even a dataset that you replicate to, even without --raw. Most people report no problems for long periods of time, but the bug list is long and largely unmitigated.

For mountpoints, the short answer is to set mountpoint=legacy and let nixos-generate-config put your file systems in hardware-configuration.nix like you would with any other file system. The long answer is that you can use non-legacy mountpoints for anything that you don’t list in hardware-configuration.nix, except that the ones needed for boot (/, /nix, and a couple others) do need to be in hardware-configuration.nix with the FS option zfsutil. But again, if you just use legacy, you don’t even have to think about it.

All that said, I use ZFS pretty much exclusively on all my systems, even the SD card on my rpi-based router. It’s been super reliable. I even use native encryption on a couple of them (knock on wood). But I also know what I’m doing to some degree.
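To make that concrete, here is a rough sketch of what the two styles can look like in hardware-configuration.nix; the pool and dataset names (zroot/root, zroot/nix) are placeholders, not taken from this thread:

# Legacy style: the dataset has mountpoint=legacy set on the ZFS side,
# so it is mounted from this declaration like any other filesystem.
fileSystems."/" = {
  device = "zroot/root";
  fsType = "zfs";
};

# Non-legacy style for a boot-critical dataset: the ZFS-managed mountpoint
# is kept, but the entry still needs to be declared with the zfsutil option.
fileSystems."/nix" = {
  device = "zroot/nix";
  fsType = "zfs";
  options = [ "zfsutil" ];
};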
Yes. Quite positive.
I am on a lot of Linux forums and I see people with corrupt btrfs filesystems regularly. To be clear, it is probably a very small percentage overall, but it is definitely more prevalent than with any other common filesystem.
Don’t take my comment the wrong way. I said that ZFS is “more resilient”.
It is a relative statement. That doesn’t mean that btrfs is unreliable or unusable.
Hey thanks a lot for the help and replies.
I’m going to try doing this in a VM and then in a few days if everything works out, I’ll try it out on real hardware.
Native encryption is iffy. It has a laundry list of known bugs that have been unfixed for years now.
Oh snap. I didn’t know that. Then I can’t just reimport.
I guess this is unrelated to ZFS, but is there a way to have my setup with LUKS? With both drives encrypted, but I only have to decrypt one and the other is decrypted automatically. Like I said, I made my storage zpool be encrypted with a keyfile that was in my boot zpool.
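For reference, a hedged sketch of how that kind of keyfile unlock can be written declaratively with boot.initrd.luks.devices and boot.initrd.secrets; the UUIDs, keyfile path, and secret name are placeholders, and note that with an unencrypted /boot the keyfile embedded in the initrd sits there unencrypted:

# The first volume is unlocked interactively with a passphrase.
boot.initrd.luks.devices."root" = {
  device = "/dev/disk/by-uuid/<uuid-of-root>";
};

# Copy the keyfile into the initrd (appended when the bootloader is installed)...
boot.initrd.secrets = {
  "/keyfile-storage.bin" = "/root/keyfile-storage.bin";
};

# ...and use it to unlock the second volume automatically during boot.
boot.initrd.luks.devices."storage" = {
  device = "/dev/disk/by-uuid/<uuid-of-storage>";
  keyFile = "/keyfile-storage.bin";
};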
Don’t take that the wrong way. ZFS encryption isn’t any different on NixOS than on any other distro. If it is working fine for you now, it isn’t likely to be any different on NixOS.
You should be able to re-use the existing pool if you want to.
It’s working fine, but I really don’t want to take any risks with my data…
I’m also unsure if I can reimport anyway, my Void install has ZFS 2.2.3 and Nixpkgs last time I checked had 2.2.2…
This should be put as a disclaimer in the Wiki; it mirrors exactly everything I’ve learned over time about encrypted ZFS (and the other aspects as well). I’ve now switched back to LUKS + ZFS on some devices. LUKS is still the best block storage encryption system; unless you want to send encrypted pools to remote pools, there is no massive benefit to using ZFS native encryption (except simplicity).
One last thing to add: ZFS on root with nixos-unstable is sometimes a bit sad concerning kernel upgrades, which makes a difference if you are using hardware that still gets a lot of kernel improvements.
You can use NixOS unstable for the time being, which is already on ZFS 2.2.3. I’ve been exclusively using NixOS unstable on my personal hardware for several years now and it works quite well for a rolling-release cycle.
Generally, ZFS will ask for permission when upgrading the zpool to a newer version, so switching from 2.2.3 to 2.2.2 shouldn’t be an issue. I haven’t run zfs upgrade since I created this pool and it reports no pending upgrades, so I would expect it to be compatible with 2.2.2.
zfs upgrade
This system is currently running ZFS filesystem version 5.
All filesystems are formatted with the current version.
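If it helps, a rough sketch of pointing a flake-based configuration at nixos-unstable; the hostname and module path here are placeholders:

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}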
The guide for NixOS ZFS root uses LUKS, so that may help if you want that. I’ve never used it myself, and with encryption everywhere I’ve had no issues.
The ZFS doc for NixOS is just trash and unhelpful, and upstream is unwilling to understand the problem with it or change it.
NixOS has first-class support for ZFS, but only for the "official" kernel. If you want to use other kernels (available in nixpkgs) like -xanmod, -rt, etc., you may end up in an unbuildable state. This often happens when the "other" kernel receives an update that breaks ZFS and upstream ZFS (and the ZFS packages) haven’t caught up with it yet. Fair to say though, I’m on -unstable; ymmv if you’re on stable.
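One mitigation, sketched under the assumption that the official kernel series is acceptable: pin the kernel to whatever the in-tree ZFS package declares as compatible. This attribute has existed for a while but has been slated for deprecation, so check the release notes of your channel:

{ config, ... }:
{
  # Let ZFS pick the newest kernel it is known to build against, instead of
  # tracking -xanmod/-rt and hoping the ZFS module has caught up.
  boot.kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
}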
Other than that, the mountpoints are a bit tricky, but so far I found that only a handful of mountpoints need to be statically declared (mountpoint=legacy and mounts set in hardware-config.nix). Specifically, in my case it’s these:
/
/var
/var/lib/docker
/nix
/nix/store
I’ve got about 10 other datasets mounted via the mountpoint prop (things like stuff in /home, some backups, etc.)
I’ve been using this setup for years now without issues. I think it might be possible to reduce the static list even further but I didn’t try (yet). Not even sure why I have /nix and /nix/store on separate datasets, come to think of it…
As to encryption, I’m using native ZFS. No problems so far.
And btw, imho you don’t necessarily need to trash the zroot pool, you can just create new datasets for NixOS usage.
Hey everyone, I’ve been experimenting with NixOS on a VM. I didn’t have any issues with the installation, but I’m having a really hard time trying to get a secondary storage drive to be decrypted and mounted automatically.
I’m using LUKS instead of ZFS native encryption which I guess is the issue.
I tried to do the same trick I did in my current install. I formatted the second drive with a keyfile instead of a passphrase. I’ve been trying to get it to work with /etc/crypttab like this:
environment.etc."crypttab" = {
  enable = true;
  text = ''
    storage /dev/disk/by-uuid/<the uuid> /root/keyfile luks
  '';
};

# then

fileSystems."/home/user/Storage" = {
  device = "zstorage";
  fsType = "zfs";
};
No luck though. Rebuilding this configuration crashed the entire system and put me in systemd’s emergency mode. journalctl -xe just says "timed out waiting for /dev/disk/by-uuid/<the uuid>"; I didn’t find any other useful info.
Trying to boot the configuration gets to stage 2, then hangs with two start jobs, one called Import ZFS pool "zstorage" and the other just called /dev/disk/by-uuid/<the uuid>. After one times out, I’m in emergency mode again.
I’d appreciate any hints or clues. Thanks!
I managed to solve the issue. Ended up doing something much simpler.
I just encrypted both drives with the same passphrase, and set boot.initrd.luks.reusePassphrases to true.
boot.initrd.luks.devices = {
  "root" = {
    device = "/dev/disk/by-uuid/<uuid1>";
    allowDiscards = true;
  };
  "storage" = {
    device = "/dev/disk/by-uuid/<uuid2>";
    allowDiscards = true;
  };
};
boot.initrd.luks.reusePassphrases = true;
That made it work flawlessly.
Thanks to everyone in this thread for the help!
I am very curious why your initial solution with crypttab didn’t work. It seems to indicate the disk wasn’t present. Are you sure there wasn’t a typo in the UUID or something? That’s the only thing I can think of that would cause that…
I don’t know. I copy-pasted the UUID from blkid, so it shouldn’t be that.
If the UUID was wrong, wouldn’t it error out immediately? Why would it wait forever? That doesn’t make much sense I think.