I have an RPi4 with a dual USB3 caddy holding an SSD (for boot) and a 4 TB backup drive (data only).
I’ve been able to get the Pi to boot off the SSD (ext4), and I can create a ZFS pool on the data drive and it all works great (using the 2021.05 image on a recent Pi board with current firmware). The issue I’m having is that the ZFS pool won’t import on reboot. I’ve been reading ZFS - NixOS Wiki very closely but have been unable to make things work. Here’s the basic script I’m using to create the ZFS filesystem:
# Find data disk:
lsblk
ls /dev/disk/by-id
DISK=/dev/disk/by-id/ata-HGST_HDN726040ALE614_K7GX5ZBL
sgdisk --zap-all $DISK
wipefs --all $DISK
# Format data disk
fdisk $DISK
# g        (create a new GPT partition table)
# n 1 +64G (partition 1, 64G)
# n 2 +64G (partition 2, 64G)
# n 3      (partition 3, rest of the disk: the ZFS data partition)
# w        (write and exit)
# Create the ZFS pool and filesystem
DISK=/dev/sdb3
zpool create -O mountpoint=none -O atime=off -O compression=lz4 -O xattr=sa -O acltype=posixacl -o ashift=12 -R /mnt zpool $DISK
zfs create -o mountpoint=legacy zpool/tank
mkdir /mnt/tank
mount -t zfs zpool/tank /mnt/tank
nixos-generate-config
nixos-rebuild switch
zpool export zpool # or not
reboot
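For reference, I believe the stable by-id name for the data partition is just the disk’s by-id path with "-part3" appended (that suffix is the standard udev naming for partition links; the zpool command is commented out here since it needs the real disk):

```shell
# Sketch: build the stable by-id path for partition 3 instead of using /dev/sdb3.
DISK=/dev/disk/by-id/ata-HGST_HDN726040ALE614_K7GX5ZBL
PART3="${DISK}-part3"
echo "$PART3"   # -> /dev/disk/by-id/ata-HGST_HDN726040ALE614_K7GX5ZBL-part3
# zpool create ... zpool "$PART3"   # i.e. use this path instead of /dev/sdb3
```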
Then on boot I get messages like:
[FAILED] Failed to start Import ZFS pool "zpool"
systemctl status zfs-import-zpool.service says "cannot import 'zpool': no such pool available".
So the pool is missing. If I run zpool import -f, the state is “FAULTED corrupted data”. But I just created a new, blank pool, so why would it be corrupted? Perhaps my wiping didn’t really put the disk in a fresh state? Or maybe I’m missing something in my .nix configuration? The hardware-configuration.nix does have an entry for “/mnt/tank”, and I am setting “networking.hostId” too.
I’ve tried many things, including creating a mountpoint=legacy mount for zpool, but that didn’t help.
I didn’t have devNodes set, thanks. I added it, but the pool still doesn’t mount on reboot. I’m not sure I’m doing it right. What I’m doing is:
# wipe disk and create 3 partitions (as above)
DISK=/dev/sdb3
zpool create -O mountpoint=none -O atime=off -O compression=lz4 -O xattr=sa -O acltype=posixacl -o ashift=12 -R /mnt zpool $DISK
zpool status
  pool: zpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          sdb3      ONLINE       0     0     0

errors: No known data errors
zfs create -o mountpoint=legacy zpool/tank
mount -t zfs zpool/tank /mnt/tank
nixos-generate-config
nixos-rebuild switch
# not sure if I need: zpool export zpool
reboot
Is the “by-label” here “sdb3”? My lsblk output is:
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0 29.5G  0 disk
├─sda1     8:1    0   30M  0 part
└─sda2     8:2    0 29.5G  0 part /
sdb        8:16   0  3.6T  0 disk
├─sdb1     8:17   0   64G  0 part
├─sdb2     8:18   0   64G  0 part
└─sdb3     8:19   0  3.5T  0 part      # this is the ZFS data partition
sdc        8:32   1 14.9G  0 disk
└─sdc1     8:33   1 14.9G  0 part /thumbdrive
I also tried setting DISK= to the by-id value and changing to boot.zfs.devNodes = "/dev/disk/by-id"; but that didn’t fix it either.
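To make sure we’re talking about the same options, here is roughly what I understand the wiki to require, all together (a sketch, not my literal file: the hostId value is a placeholder, and the fileSystems entry is what I believe nixos-generate-config produces for a legacy-mounted dataset):

```nix
{
  # Placeholder value; mine matches `head -c 8 /etc/machine-id`.
  networking.hostId = "8425e349";

  boot.supportedFilesystems = [ "zfs" ];

  # Directory scanned for pool devices at import time.
  boot.zfs.devNodes = "/dev/disk/by-id";

  # The entry hardware-configuration.nix has for the legacy mount.
  fileSystems."/mnt/tank" = {
    device = "zpool/tank";
    fsType = "zfs";
  };
}
```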
For networking.hostId I double-checked, and it is the same value as head -c 8 /etc/machine-id.
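For anyone wanting to reproduce that check, this is the comparison I mean, demonstrated on a stand-in file rather than the real /etc/machine-id:

```shell
# Demonstrate the hostId derivation on a stand-in file; on a real system
# you would read /etc/machine-id instead. The hex string is a placeholder.
printf '8425e3491bd04f9db0c3a2f5c1f6e7a8\n' > /tmp/machine-id.example
head -c 8 /tmp/machine-id.example   # prints the first 8 hex chars: 8425e349
```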