Mount dependency on ZFS dataset

I have a second NAS for my media storage that I expose to my first NAS over NFS. It has always acted weird with the mount on reboot, so I must be doing something wrong: when I ls /export/media after a reboot it is empty, because it is somehow bind mounting the zroot pool instead of what I want (the rust pool).

I have:

{
  # The HDD pool
  boot.zfs.extraPools = ["rust"];
  # Could this be adding to the issue? I do need it for NFS
  systemd.tmpfiles.settings."exports" = {
    "/export".d = {
      user = "nobody";
      group = "nogroup";
    };
  };

  fileSystems = {
    "/export/media" = {
      device = "/rust/media";
      options = ["bind"];
      fsType = "none";
    };
  };
}

This produces the wrong mounts on boot:

zroot/root on /export/media type zfs (rw,relatime,xattr,posixacl,casesensitive)
rust/media on /rust/media type zfs (rw,noatime,xattr,posixacl,casesensitive)

I have tried adding depends, but that didn't solve the problem and only made it mount rust/media twice.

What am I doing wrong? Is this not how I'm supposed to bind mount? Could ZFS, systemd-mount, and legacy mounts be playing with me somehow?

ZFS’s non-legacy mountpoints don’t use standard systemd mount units; instead, a single zfs-mount.service unit mounts them all. So the things you can do in systemd to say “I need this mountpoint to be mounted” don’t work with non-legacy ZFS mountpoints. There are ways to make ZFS do this more correctly, but I wouldn’t recommend them. Instead, just order the things that need those mountpoints after zfs-mount.service.

  fileSystems = {
    "/export/media" = {
      device = "/rust/media";
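      # Wait for zfs-mount.service, so the bind source (rust/media) is mounted
      # before the bind mount is created.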
      options = ["bind" "x-systemd.after=zfs-mount.service"];
      fsType = "none";
    };
  };
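
The same ordering trick works for ordinary services too, if you ever need it. For example, something along these lines (an untested sketch; nfs-server is the usual unit name for the kernel NFS server on NixOS, so double-check yours) would also make the NFS server itself wait for the ZFS datasets:

  # Sketch: order the NFS server after zfs-mount.service as well,
  # so it doesn't start exporting before rust/media is actually mounted.
  systemd.services.nfs-server = {
    after = [ "zfs-mount.service" ];
    wants = [ "zfs-mount.service" ];
  };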

That fixed it, thank you so much!