Looking for Guides: ZFS install with bpool and rpool on different disk sets

I’m looking for a guide (or guides) covering NixOS with multiple disk sets - in my case mirrored SSDs for boot (bpool) and swap, and multiple NVMe drives for root (rpool).

I’ve tried modifying guides and scripts that assume bpool and rpool are on the same disk set (the OpenZFS guide and others).

Whichever way I turn I get different errors, which just highlights the gaps in my understanding of the install process and configuration aspects (and of bash, too). :blush:

I am determined to gain the knowledge in due course, but I could do with something to help me get my main machine up and running while I learn - so any help much appreciated!



I’ve done such a setup a while ago. I have two SSDs and two RDDs (rotating disk drives). The SSDs function as the boot drives, carrying a zboot pool plus a zssd pool that uses the leftover space. Then there is a zroot pool on the RDDs.
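In rough strokes, that three-pool layout could be created as below. This is a dry-run sketch, not the actual script: the pool names follow the post, but the partition labels and the grub2 compatibility setting are my assumptions.

```shell
#!/bin/sh
# Dry-run: print each zpool command instead of executing it.
# Swap the echo for `"$@"` to apply the commands for real.
run() { echo "+ $*"; }

# Mirrored boot pool on the SSDs, restricted to GRUB-readable features.
run zpool create -o compatibility=grub2 -O mountpoint=/boot -R /mnt \
    zboot mirror /dev/disk/by-partlabel/ssd1-boot /dev/disk/by-partlabel/ssd2-boot

# Second mirrored pool using the SSDs' remaining space.
run zpool create -O mountpoint=none -R /mnt \
    zssd mirror /dev/disk/by-partlabel/ssd1-data /dev/disk/by-partlabel/ssd2-data

# Root pool mirrored across the rotating drives.
run zpool create -O mountpoint=none -R /mnt \
    zroot mirror /dev/disk/by-partlabel/rdd1-root /dev/disk/by-partlabel/rdd2-root
```

The dry-run wrapper lets you eyeball the exact commands against your own device labels before anything touches the disks.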

Here is a gist of the script I used. Feel free to ask questions.

Basically this is a quick and dirty edit of my nixos-zfs-setup repository script.

Happy Nix’ing!


I can confirm that @markvd’s script works and, additionally, uses a nice/sane naming scheme for pools! I also learned about (and opted into) impermanence - something I would never have considered, previously.

Happy user over here, thank you! I think the only change I made was adding a reservation pool to keep disk usage under 80%.


Hi @stites, thank you for your feedback! I’m glad to hear you’ve learned about impermanence, too. It’s a very useful method that fits well into NixOS.

Could you please elaborate on the reservation pool to keep disk usage under 80%? I’ve learned a lot about zfs quotas and reservation with this howto, so I’m curious to learn about your specific use case.

Many thanks - that was just the push I needed! I ended up using your “quick and dirty” version to help me understand the other one, which I used as the basis for my script. I made a couple of mistakes along the way (amazing how much trouble a typo can cause), but I’m happy with my version now.

I modified it a little - in particular I set the MBR partition to 2M following a warning that the original size (100K) was too small, and I also used the swap approach from the OpenZFS guide.

Thanks again :grinning:

Glad it was helpful! Could you please detail the swap approach from the OpenZFS guide? What are the benefits of swap inside a pool compared to a directly accessible swap partition?

I am much less zfs-savvy - I just used the relevant section of the wiki page for ZFS, and confirmed via this serverfault answer that the 80% guideline holds even for very large disks. The code in the wiki creates a dataset specifically to hold a 20% reservation, which seems nice because you can easily destroy it in a bind; copied below:

zfs create -o refreservation=10G -o mountpoint=none rpool/reserved

The article you linked seems a bit more involved; it’s probably better suited to multi-disk systems, where you want to stop individual disks from hitting capacity. I haven’t ever hit the 80% limit myself, though, and my past two deployments have just been single-disk ZFS installations for laptops (ZFS is just used for backups/snapshots). Sorry I can’t be more help!
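For the 80% rule specifically, the reservation can also be derived from the pool’s actual size instead of hard-coding 10G. A sketch - the pool name `rpool`, the dataset name, and the example byte count are assumptions:

```shell
#!/bin/sh
# Keep ~20% of the pool free by reserving one fifth of its size.
# In practice pool_size would come from: zpool get -Hp -o value size rpool
pool_size=245107195904            # example value, in bytes (~228 GiB)
reserve=$(( pool_size / 5 ))      # 20% of the pool
echo "refreservation=${reserve}"
# Then create the placeholder dataset, destroyable in a pinch:
#   zfs create -o refreservation=${reserve} -o mountpoint=none rpool/reserved
```

Because the reservation lives on its own empty dataset, reclaiming the space later is a single `zfs destroy`.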


Heads up, if you are using swap with an eye towards hibernation, you may want to follow this open issue (https://github.com/openzfs/zfs/issues/12842) which indicates that you may encounter data corruption in various configurations.


My script did this… the swap is not inside a pool (for the reasons @stites referenced).

i=1
for d in "${BOOT_DISK[@]}"; do
  sgdisk --zap-all "${d}"
  sgdisk -a1 -n1:0:+2M -t1:EF02 -c 1:${PART_MBR}${i} "${d}"        # BIOS boot (MBR)
  sgdisk -n2:0:+1G -t2:EF00 -c 2:${PART_EFI}${i} "${d}"            # EFI system partition
  sgdisk -n3:0:+4G -t3:BE00 -c 3:${PART_BOOT}${i} "${d}"           # boot pool
  sgdisk -n4:0:+${SWAPSIZE} -t4:8200 -c 4:${PART_SWAP}${i} "${d}"  # swap
  sgdisk -n5:0:+8G -t5:BF07 -c 5:${PART_LOG}${i} "${d}"            # ZFS SLOG
  sgdisk -n6:0:0 -t6:BF08 -c 6:${PART_CACHE}${i} "${d}"            # ZFS L2ARC
  partprobe "${d}"
  sleep 2

  # Alternative: plain dm-crypt swap with a throwaway random key
  # (incompatible with hibernation, so not used here):
  #cryptsetup open --type plain --key-file /dev/random ${d}-part4 ${d##*/}-part4
  #mkswap /dev/mapper/${d##*/}-part4
  #swapon /dev/mapper/${d##*/}-part4

  mkswap -L ${PART_SWAP}fs${i} /dev/disk/by-partlabel/${PART_SWAP}${i}
  swapon /dev/disk/by-partlabel/${PART_SWAP}${i}

  (( i++ )) || true
done
unset i d
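A side note on the `${d##*/}` expansions in the loop above: they strip the directory part of the device path so the dm-crypt mapping gets a short name. A standalone illustration (the device path is made up):

```shell
#!/bin/sh
# ${var##pattern} removes the longest prefix matching the pattern;
# here '*/' drops everything up to and including the last slash.
d=/dev/disk/by-id/ata-EXAMPLE_SSD-part4
echo "${d##*/}"    # prints: ata-EXAMPLE_SSD-part4
```

So `/dev/disk/by-id/ata-EXAMPLE_SSD-part4` would yield a mapper device at `/dev/mapper/ata-EXAMPLE_SSD-part4`.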

I have the swap designated as a future option for hibernation - but I don’t actually use it (I used to omit swap on this machine previously, but I have mirrored 240 GB SATA disks for boot, so I might as well reserve some of the space for the future). I’m not sure this is the best approach either… but I wanted something. I don’t need it as I have 64 GB memory, and the machine tends to stay powered up. The log and cache partitions are there for similar reasons, but also not currently used.


Actually, NixOS disables hibernation by default when ZFS is part of the configuration. Technically I think it could be fixed with some changes to the initrd, but that hasn’t been done yet.