ZFS install: legacy or not?

The ZFS install documentation used to mention a non-legacy way, but the recent version changed to use legacy.
I've used legacy before and had started converting my script to use the "new way". Is legacy currently a best practice? Why?

https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/Root%20on%20ZFS/2-system-installation.html


That guide is a little odd: it sets mountpoint=/ and canmount=off on the root dataset, but the child it creates under it has mountpoint=none, and the children of that have mountpoint=legacy. Setting mountpoint=/ on the root dataset is pointless in that case, since it doesn't inform the mountpoints of any children and the root dataset is never mounted itself.

Anyway, as for legacy or not, it doesn't really matter, but you should treat the two a little differently. With legacy mountpoints you can do things the way you normally would with other file systems: mount them, then use nixos-generate-config [--show-hardware-config] to have your file systems configured.
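
For illustration, this is roughly what nixos-generate-config produces for legacy-mounted datasets (the dataset names below are placeholders, not taken from any config in this thread):

# Placeholder dataset names; with mountpoint=legacy the datasets are
# mounted the ordinary way, so they end up as plain fileSystems entries
# in hardware-configuration.nix.
fileSystems."/" = {
  device = "rpool/ROOT/host";
  fsType = "zfs";
};

fileSystems."/nix" = {
  device = "rpool/nix";
  fsType = "zfs";
};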

But if you use non-legacy mountpoints, you should not include fileSystems entries for them in your hardware-configuration.nix, except for the ones required in the initrd, and those should have options = ["zfsutil"];. The reason is that ZFS will handle mounting for you, while NixOS will try to have systemd manage its fileSystems. It usually doesn't break if you get this wrong, but the two will butt heads, and I've seen very obscure behavior result from it. For reference, the FSes needed in the initrd are described by the fsNeededForBoot function.
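
As a minimal sketch (the dataset name is a placeholder), the only entries in hardware-configuration.nix would then be the boot-critical ones, each with the zfsutil option, and nothing for the rest:

# Sketch with a placeholder dataset name. Only file systems needed in
# the initrd get an entry; other datasets with native mountpoints get
# no fileSystems entry at all and are mounted by ZFS itself.
fileSystems."/" = {
  device = "rpool/ROOT/host";
  fsType = "zfs";
  options = [ "zfsutil" ];
};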

EDIT: For reference, here's a NixOS test to show the best-case scenario if you configure fileSystems for non-legacy, non-initrd mountpoints:

machine # [    2.186957] systemd[1]: Starting zfs-import-foo.service...
machine # [    2.490869] zfs-import-foo-start[625]: importing ZFS pool "foo"...Successfully imported foo
machine # [    2.494618] systemd[1]: Finished zfs-import-foo.service.
machine # [    2.494873] systemd[1]: Reached target zfs-import.target.
machine # [    2.497590] systemd[1]: Mounting foo.mount...
machine # [    2.498170] systemd[1]: Starting zfs-mount.service...
machine # [    2.508042] mount[784]: filesystem 'foo' cannot be mounted using 'mount'.
machine # [    2.508363] mount[784]: Use 'zfs set mountpoint=legacy' or 'zfs mount foo'.
machine # [    2.508651] mount[784]: See zfs(8) for more information.
machine # [    2.510475] systemd[1]: foo.mount: Mount process exited, code=exited, status=1/FAILURE
machine # [    2.510838] systemd[1]: Mounted foo.mount.
machine # [    2.514441] systemd[1]: Finished zfs-mount.service.

The boot succeeds and the FS is mounted. But the point is that you should just pick one. I've seen bizarre behavior caused by a file system having both a non-legacy mountpoint and an entry in hardware-configuration.nix. For instance, this version of the test fails, because it simulates the case where you've moved your ZFS mountpoint but not updated your hardware-configuration.nix. That should have just worked, but because of the fileSystems entry, the mountpoint is now a hardware detail baked into the NixOS generation.


FWIW, I have this setup for rpool/ROOT:

❯ zfs list -o space,canmount,mountpoint -r rpool
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  CANMOUNT  MOUNTPOINT
rpool                 195G  23.8G        0B    192K             0B      23.8G  off       none
rpool/ROOT            195G  4.78G        0B    192K             0B      4.78G  off       /var/lib/containers
rpool/ROOT/ilus       195G  4.78G     3.34G   1.44G             0B         0B  on        legacy
rpool/ROOT/ns         195G   192K        0B    192K             0B         0B  on        /var/lib/containers/ns
…

Basically, the “root filesystems” for the host and for any containers all live as peers here, named by hostname. I use legacy for the host root, overriding the default mountpoint that the containers inherit. Sometimes there are other overrides (if different container tools are in use, though I mostly use nixos-containers).

The main reason for this is that the same zfs snapshot/replication policy applies to all of them.

I don't use mountpoint=legacy for any data filesystems, only for the root and the nix store (and /tmp if it's ZFS rather than tmpfs).
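
A minimal sketch of how those legacy-mounted file systems might look in hardware-configuration.nix, assuming rpool/ROOT/ilus is the host root from the listing above (the nix store and /tmp dataset names are made up):

# Only the legacy-mounted file systems appear here; data datasets with
# native mountpoints are left out and mounted by ZFS itself.
fileSystems."/"    = { device = "rpool/ROOT/ilus"; fsType = "zfs"; };
fileSystems."/nix" = { device = "rpool/nix";       fsType = "zfs"; };  # made-up dataset name
fileSystems."/tmp" = { device = "rpool/tmp";       fsType = "zfs"; };  # made-up dataset name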