ZFS with disko, failure to import ZFS pool

So my end goal was to build a NAS, with NixOS installed on an SSD and three HDDs for the ZFS pool. I have little experience with ZFS, but after playing with it a bit on Ubuntu I got a rudimentary understanding of how it works.

My initial idea was to install the OS on a fast SSD, which can be much smaller than the three HDDs, so I planned to put it on an ext4 filesystem and create a ZFS pool across the other three HDDs to store data with some redundancy.

Right now I am using disko for all of my machines. What I like about disko is that it can be used during installation with nixos-anywhere to format the drives and then to configure them, so there is no need to configure mount points with fileSystems.

For example, this is the config I use for my VM with a single drive, vda:

{
  disko.devices = {
    disk = {
      vda = {
        device = "/dev/vda";
        type = "disk";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              type = "EF00";
              size = "500M";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            };
            root = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}

Based on this understanding, I tried the example provided by disko for ZFS alongside ext4:

# systemd will mount an ext4 filesystem at / and zfs will mount the dataset underneath it
{
  disko.devices = {
    disk = {
      disk1 = {
        type = "disk";
        device = "/dev/vda";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              size = "500M";
              type = "EF00";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
                mountOptions = [ "umask=0077" ];
              };
            };
            primary = {
              size = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
      disk2 = {
        type = "disk";
        device = "/dev/vdb";
        content = {
          type = "zfs";
          pool = "zroot";
        };
      };
    };
    zpool = {
      zroot = {
        type = "zpool";
        datasets = {
          "root" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "root/zfs_fs" = {
            type = "zfs_fs";
            mountpoint = "/zfs_fs";
            options."com.sun:auto-snapshot" = "true";
          };
        };
      };
    };
  };
}

This is exactly what I am aiming for (SSD for the OS, HDDs for the ZFS pool).
Installation with nixos-anywhere goes just fine, but the system fails to boot: it gets stuck on importing the ZFS pool (zroot).

I also tried to go full ZFS: I took the disko example and trimmed it down to the bare bones.

{
  disko.devices = {
    disk = {
      x = {
        type = "disk";
        device = "/dev/vda";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              size = "64M";
              type = "EF00";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
                mountOptions = [ "umask=0077" ];
              };
            };
            zfs = {
              size = "100%";
              content = {
                type = "zfs";
                pool = "zroot";
              };
            };
          };
        };
      };
      y = {
        type = "disk";
        device = "/dev/vdb";
        content = {
          type = "gpt";
          partitions = {
            zfs = {
              size = "100%";
              content = {
                type = "zfs";
                pool = "zroot";
              };
            };
          };
        };
      };
    };
    zpool = {
      zroot = {
        type = "zpool";
        mode = "mirror";
        # Workaround: cannot import 'zroot': I/O error in disko tests
        options.cachefile = "none";
        rootFsOptions = {
          compression = "zstd";
          "com.sun:auto-snapshot" = "false";
        };
        mountpoint = "/";
        postCreateHook = "zfs list -t snapshot -H -o name | grep -E '^zroot@blank$' || zfs snapshot zroot@blank";

        datasets = {
          zfs_fs = {
            type = "zfs_fs";
            mountpoint = "/zfs_fs";
            options."com.sun:auto-snapshot" = "true";
          };
        };
      };
    };
  };
}

But nothing works. At worst, I get stuck at the import failure with the option to press Enter and drop into emergency mode (which also fails); at best, I was able to boot into the OS after the zpool import failure, but the pool was not mounted (I was able to mount it manually).

A few times I got to the point where it looked like the import worked, but mounting the datasets failed.

I did some reading, and my understanding is that systemd and ZFS mounting can end up in a race condition, so I also tried legacy mode for ZFS, but it did not work for me either.
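
To be explicit about what I mean by "legacy mode": the dataset's ZFS mountpoint property is set to legacy, so ZFS itself never mounts it, and disko instead generates a fileSystems entry so systemd mounts it from fstab. A minimal sketch of one such dataset (the dataset name here is just an example):

  "root/zfs_fs" = {
    type = "zfs_fs";
    # ZFS property: do not let ZFS auto-mount this dataset
    options.mountpoint = "legacy";
    # disko-level mountpoint: ends up in fileSystems / fstab, mounted by systemd
    mountpoint = "/zfs_fs";
  };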

As I mentioned before, disko should handle all of the mount configuration, so I started with no additional boot config other than what I had with ext4:

{
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
}

Then I tried adding:

  boot.supportedFilesystems = [ "zfs" ];
  #boot.zfs.enabled = true;
  boot.zfs.extraPools = [ "zroot" ]; # Ensure NixOS imports this pool at boot
  networking.hostId = "3F8C6B19"; # Replace with your unique host ID

  services.zfs.autoSnapshot.enable = true; # Enables automatic snapshots
  services.zfs.trim.enable = true; # Enables TRIM for SSDs

with no success.

I am looking for input, or better yet a working example.
Also feel free to comment on my NAS design.

I’ve used this disko config for zfs on root and had no boot issues

I have used your config to the letter and I am still getting the same issue.

I am getting an error at startup.
Keep in mind that disko enables support for ZFS and replaces the fileSystems settings.

This is what I see:

starting device mapper and LVM
file descriptor 8 (/dev/console) leaked on lvm invocation. Parent PID 1: /nix/store/(hash)-extra-utils/bin/ash
importing root ZFS pool name-of-my-pool.........(bunch of dots)

and then, after some time:
cannot import name-of-my-pool: no such pool available 

mounting name-of-my-pool/safe/root on / ... 
mount: mounting name-of-my-pool/safe/root on /mnt-root/ failed: No such file or directory.

This repeats multiple times.

I have looked at your hardware-configuration.nix and configuration.nix and I don't see anything special there. I did find some ZFS-related config:

  boot.initrd.supportedFilesystems = [ "zfs" ];
  boot.supportedFilesystems = [ "zfs" ];

and added them to my config, but I see no change.

If you’re using a VM, you might need to set boot.zfs.devNodes = "/dev/disk/by-uuid";, as the default (by-id) for some reason doesn’t always seem to work for virtual disks.

@ElvishJerricco I could kiss you :slight_smile: The issue was fixed by explicitly setting:

  boot = {
    zfs = {
      devNodes = "/dev/disk/by-uuid";
    };
  };

Fun fact: after my VM finished booting, /dev/disk/by-id only points to the CDROM, while /dev/disk/by-uuid has my vda1/vda2.

So this is far from being my final NAS configuration (I might add that once I am done; it will be based on the TrueNAS scheme), but it is a good intermediate step that can serve as a complete working reference and make this thread more helpful in the future. This config was only tested on a qemu-kvm VM and not on real hardware.

The disko configuration is based on the link provided by @misuzu:

# UEFI only, which means the VM needs to be set up with UEFI boot
{
  disko.devices = {
    disk = {
      first = {
        type = "disk";
        device = "/dev/vda";
        content = {
          type = "gpt";
          partitions = {
            esp = {
              type = "EF00";
              size = "1G";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
                mountOptions = [
                  "defaults"
                ];
              };
            };
            root = {
              size = "100%";
              content = {
                type = "zfs";
                pool = "rpool";
              };
            };
          };
        };
      };
    };
    zpool = {
      rpool = {
        type = "zpool";
        options = {
          ashift = "12";
        };
        rootFsOptions = {
          mountpoint = "none";
          acltype = "posixacl";
          xattr = "sa";
          atime = "off";
          # encryption = "on";
          # keyformat = "passphrase";
        };
        datasets = {
          "safe" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "safe/root" = {
            type = "zfs_fs";
            options.mountpoint = "legacy";
            mountpoint = "/";
          };
          "local" = {
            type = "zfs_fs";
            options.mountpoint = "none";
          };
          "local/reserved" = {
            type = "zfs_fs";
            options.mountpoint = "none";
            options.refreservation = "2G";
          };
          "local/nix" = {
            type = "zfs_fs";
            options.mountpoint = "legacy";
            options.compression = "lz4";
            mountpoint = "/nix";
          };
          "local/tmp" = {
            type = "zfs_fs";
            options.mountpoint = "legacy";
            mountpoint = "/tmp";
          };
        };
      };
    };
  };
}

I can also confirm that there was no need to fiddle with fileSystems, e.g.:

  fileSystems."/" =
    { device = "some disk path";
      fsType = "zfs";
    };

Everything is handled by disko.
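
For completeness, this is roughly how the disko config gets pulled into the system configuration (a minimal sketch; the flake input name, the way inputs is passed to modules, and the file name are assumptions on my side):

  {
    imports = [
      inputs.disko.nixosModules.disko # the disko NixOS module
      ./disko-config.nix              # the disko layout shown above
    ];
  }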

The only thing I added to my boot configuration was:

  boot = {
    zfs = {
      devNodes = "/dev/disk/by-uuid";
    };
  };

I have not tested whether this is needed on actual hardware. Other than that, I have a very minimal boot config:

{
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
  boot.initrd.availableKernelModules = [
    "ahci"
    "xhci_pci"
    "virtio_pci"
    "sr_mod"
    "virtio_blk"
  ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ "kvm-intel" ];
  boot.extraModulePackages = [ ];

}

which was generated for me when I first installed NixOS on the VM.

ZFS and mounting: as you can see, all mountpoints are set to legacy:

 options.mountpoint = "legacy";

This means that systemd is the one handling the mounts (based on /etc/fstab), not ZFS. I did not try to let ZFS handle the mountpoints directly, although the disko examples show that this option exists. It might or might not be more complicated; there is a potential for race conditions between ZFS and systemd mounting.
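
For comparison, the ZFS-managed variant would look roughly like this (untested by me, and the dataset name is just an example): instead of "legacy" you set the ZFS mountpoint property to a real path and omit the disko-level mountpoint, so ZFS mounts the dataset itself rather than systemd via fstab.

  "safe/data" = {
    type = "zfs_fs";
    # ZFS mounts this dataset itself at /data; no fstab entry is generated
    options.mountpoint = "/data";
  };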

This config is UEFI only. I am not sure if it will work with BIOS (after you replace the GPT config), or if using ZFS with BIOS is even a good idea.
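
For anyone who does want BIOS boot: the usual disko pattern (untested here, so treat it as a sketch) is to keep the GPT layout but add a tiny BIOS boot partition and use GRUB instead of systemd-boot:

  boot = {
    size = "1M";
    type = "EF02"; # BIOS boot partition, lets GRUB embed itself on a GPT disk
  };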

So even when using a qemu-kvm VM you don't need

  boot = {
    zfs = {
      devNodes = "/dev/disk/by-uuid";
    };
  };

all you need to do is set a serial for every disk in the libvirt domain XML:

<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" discard="unmap"/>
  <source file="/var/lib/libvirt/images/nixos-unstable-6-clone.qcow2" index="3"/>
  <backingStore/>
  <target dev="vdc" bus="virtio"/>
  <serial>third</serial>
  <alias name="virtio-disk2"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</disk>

It is not set by default, and once you do, you will see it in your pool status:

zpool status
  pool: boot-pool
 state: ONLINE
config:

	NAME                     STATE     READ WRITE CKSUM
	boot-pool                ONLINE       0     0     0
	  mirror-0               ONLINE       0     0     0
	    virtio-first-part2   ONLINE       0     0     0
	    virtio-second-part1  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
config:

	NAME                     STATE     READ WRITE CKSUM
	tank                     ONLINE       0     0     0
	  raidz1-0               ONLINE       0     0     0
	    virtio-third-part1   ONLINE       0     0     0
	    virtio-fourth-part1  ONLINE       0     0     0
	    virtio-fifth-part1   ONLINE       0     0     0
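
One last note on the data pool: since tank is not the root pool, NixOS will not necessarily import it on its own if none of its datasets appear in fileSystems; a sketch of the option already mentioned earlier in the thread that covers this, assuming the pool name from the status output above:

  boot.zfs.extraPools = [ "tank" ];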