Cannot import ZFS pool at boot


After being pleased with NixOS, and after losing some files by running rm a bit too quickly, I would like to install NixOS on ZFS and use its automatic snapshots.

In my case, I want to install NixOS in a virtual machine running on a Windows 10 Pro host with VMware Workstation Pro 15.

I then followed the guide on the wiki and also tried another guide I found online.

I can successfully install NixOS. However, I cannot boot into it, because stage 1 fails with cannot import 'rpool': no such pool available. Here is a screenshot of the console.

In hardware-configuration.nix, fileSystems."/".device refers to rpool. However, how is NixOS supposed to know which partition the rpool pool lives on (/dev/sda1 in my case)? I guess that is why it is not found.

Did anyone encounter this error before? Any help would be appreciated.

I had some issues auto-importing ZFS volumes (though they were not root pools). To sort this out, it might be a good idea to check the points mentioned here: ZFS - NixOS Wiki

More precisely:

  • Is your ZFS mountpoint set to legacy?
  • Did you add boot.supportedFilesystems = [ "zfs" ]; to your configuration.nix?

Hope it helps!

I followed precisely the How to install NixOS on a ZFS root filesystem guide.

I ran the following commands:

zfs create -o mountpoint=legacy rpool/root/nixos
zfs create -o mountpoint=legacy rpool/home

and added boot.supportedFilesystems = [ "zfs" ]; to configuration.nix.
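
To double-check the first point (legacy mountpoints), the property can be verified from the installer before rebooting. A minimal sketch, using the dataset names from this thread:

```shell
# Expect "legacy" for both datasets; anything else means stage 1 and
# the fileSystems.* entries will disagree about who mounts them
zfs get -H -o name,value mountpoint rpool/root/nixos rpool/home
```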

The only thing from the guide I did not follow was referring to the disk via /dev/disk/by-id/…. Instead, I referred directly to /dev/sda, because the only entry in /dev/disk/by-id/ was ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001, which is the virtual CD containing the NixOS ISO used for the install.

Here is my full hardware-configuration.nix:

# Do not modify this file!  It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations.  Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, ... }:

{
  imports = [ ];

  boot.initrd.availableKernelModules = [ "ata_piix" "mptspi" "uhci_hcd" "ehci_pci" "xhci_pci" "sd_mod" "sr_mod" ];
  boot.initrd.kernelModules = [ ];
  boot.kernelModules = [ ];
  boot.extraModulePackages = [ ];

  fileSystems."/" =
    { device = "rpool/root/nixos";
      fsType = "zfs";
    };

  fileSystems."/home" =
    { device = "rpool/home";
      fsType = "zfs";
    };

  fileSystems."/boot" =
    { device = "/dev/disk/by-uuid/2C1A-9868";
      fsType = "vfat";
    };

  swapDevices = [ ];

  nix.maxJobs = lib.mkDefault 6;
}

and my configuration.nix:

# Edit this configuration file to define what should be installed on
# your system.  Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  boot.supportedFilesystems = [ "zfs" ];

  # Use the systemd-boot EFI boot loader.
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  networking.hostId = "8453be09";
  # networking.hostName = "nixos"; # Define your hostname.
  # networking.wireless.enable = true;  # Enables wireless support via wpa_supplicant.

  # The global useDHCP flag is deprecated, therefore explicitly set to false here.
  # Per-interface useDHCP will be mandatory in the future, so this generated config
  # replicates the default behaviour.
  networking.useDHCP = false;
  networking.interfaces.ens33.useDHCP = true;

  # Configure network proxy if necessary
  # networking.proxy.default = "http://user:password@proxy:port/";
  # networking.proxy.noProxy = ",localhost,internal.domain";

  # Select internationalisation properties.
  # i18n = {
  #   consoleFont = "Lat2-Terminus16";
  #   consoleKeyMap = "us";
  #   defaultLocale = "en_US.UTF-8";
  # };

  # Set your time zone.
  # time.timeZone = "Europe/Amsterdam";

  # List packages installed in system profile. To search, run:
  # $ nix search wget
  # environment.systemPackages = with pkgs; [
  #   wget vim
  # ];

  # Some programs need SUID wrappers, can be configured further or are
  # started in user sessions.
  # programs.mtr.enable = true;
  # programs.gnupg.agent = { enable = true; enableSSHSupport = true; };

  # List services that you want to enable:

  # Enable the OpenSSH daemon.
  # services.openssh.enable = true;

  # Open ports in the firewall.
  # networking.firewall.allowedTCPPorts = [ ... ];
  # networking.firewall.allowedUDPPorts = [ ... ];
  # Or disable the firewall altogether.
  # networking.firewall.enable = false;

  # Enable CUPS to print documents.
  # services.printing.enable = true;

  # Enable sound.
  # sound.enable = true;
  # hardware.pulseaudio.enable = true;

  # Enable the X11 windowing system.
  # services.xserver.enable = true;
  # services.xserver.layout = "us";
  # services.xserver.xkbOptions = "eurosign:e";

  # Enable touchpad support.
  # services.xserver.libinput.enable = true;

  # Enable the KDE Desktop Environment.
  # services.xserver.displayManager.sddm.enable = true;
  # services.xserver.desktopManager.plasma5.enable = true;

  # Define a user account. Don't forget to set a password with ‘passwd’.
  # users.users.jane = {
  #   isNormalUser = true;
  #   extraGroups = [ "wheel" ]; # Enable ‘sudo’ for the user.
  # };

  # This value determines the NixOS release with which your system is to be
  # compatible, in order to avoid breaking some software such as database
  # servers. You should change this only after NixOS release notes say you
  # should.
  system.stateVersion = "19.09"; # Did you read the comment?
}


I am not really sure what happens here. Based on this comment in the guide:

Always use the by-id aliases for devices, otherwise ZFS can choke on imports.

I have the feeling this might be related. How is the disk connected in the virtual machine settings? It might be related to the virtual controller.

It is a virtual disk of type SCSI, which is the default and recommended option when creating a virtual machine with VMware Workstation Pro 15. It works perfectly with NixOS on an ext4 file system.

Shouldn’t that be done automatically, by way of fileSystems."/" having fsType = "zfs";?

Definitely don’t do that. I wouldn’t be surprised if that’s the cause of your issue. Any of the symlinks in /dev/disk is better than /dev/sda.


While configuring my own NAS with a ZFS raidz pool, I discovered something which could be related to your issue.

When you create the ZFS pool from the install environment, it records an identifier for the last system that imported it. On the first reboot, the OS refuses to import the pool. I had to wait for the emergency shell to be dropped, then import it manually with the -f (force) flag.
That rewrites the last-imported identifier, and it works perfectly after that. I do not know if there is a way to reset this before rebooting at the end of the NixOS install. I believe it could be something like zpool export, but I do not know for sure.
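
From the initrd emergency shell, the recovery described above looks roughly like this (a sketch, using the pool name from this thread):

```shell
# Force the import even though the pool was last imported by another
# system (the installer environment), then resume stage 1
zpool import -f rpool
exit
```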

I do not have many symlinks when installing:

[root@nixos:~]# l /dev/disk/
total 0
drwxr-xr-x  6 root root  120 Nov 18  2019 .
drwxr-xr-x 18 root root 3.7K Nov 18  2019 ..
drwxr-xr-x  2 root root   60 Nov 18  2019 by-id
drwxr-xr-x  2 root root   60 Nov 18  2019 by-label
drwxr-xr-x  2 root root   80 Nov 18  2019 by-path
drwxr-xr-x  2 root root   60 Nov 18  2019 by-uuid

[root@nixos:~]# l /dev/disk/by-id/
total 0
drwxr-xr-x 2 root root  60 Nov 18  2019 .
drwxr-xr-x 6 root root 120 Nov 18  2019 ..
lrwxrwxrwx 1 root root   9 Nov 18  2019 ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 -> ../../sr0

[root@nixos:~]# l /dev/disk/by-label/
total 0
drwxr-xr-x 2 root root  60 Nov 18  2019 .
drwxr-xr-x 6 root root 120 Nov 18  2019 ..
lrwxrwxrwx 1 root root   9 Nov 18  2019 NIXOS_ISO -> ../../sr0

[root@nixos:~]# l /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root  80 Nov 18  2019 .
drwxr-xr-x 6 root root 120 Nov 18  2019 ..
lrwxrwxrwx 1 root root   9 Nov 18  2019 pci-0000:00:07.1-ata-2 -> ../../sr0
lrwxrwxrwx 1 root root   9 Nov 18  2019 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda

[root@nixos:~]# l /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root  60 Nov 18  2019 .
drwxr-xr-x 6 root root 120 Nov 18  2019 ..
lrwxrwxrwx 1 root root   9 Nov 18  2019 1970-01-01-00-00-01-00 -> ../../sr0

There is only one symlink in /dev/disk/by-* that points to /dev/sda: /dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0. I tried installing with this one anyway, but I get the same result unfortunately.

It seems that I do not have access to the emergency shell. After pressing the space bar twice to select the "*) to ignore the error and continue" option, the machine crashes:

Could you please try to export the pool before rebooting? Based on the manpage:

Exports the given pools from the system. All devices are marked as exported, but are still considered in use by other subsystems. The devices can be moved between systems (even those of different endianness) and imported as long as a sufficient number of devices are present.

What happens if you try to export the pool from the live environment?

I exported the pool before rebooting but the result is the same.

[root@nixos:~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP   HEALTH  ALTROOT
rpool    79G  1.20G  77.8G        -         -     0%     1%  1.00x   ONLINE  /mnt

[root@nixos:~]# zpool export rpool 
umount: /mnt: target is busy.
cannot unmount '/mnt': umount failed

[root@nixos:~]# umount /mnt/boot 

[root@nixos:~]# umount /mnt 

[root@nixos:~]# zpool export rpool 

[root@nixos:~]# reboot now

I had a similar issue when trying to install NixOS on a ZFS root yesterday and solved it by creating an LVM logical volume on the virtual hard disk’s partition.

LVM LVs have a permanent path in /dev/, and that made it work for me.

It’s weird that zpool import doesn’t scan all blockdevs and/or somehow doesn’t notice the virtual hard drives, I’d report that as a bug.

Btw, you probably have mounts nested under /mnt/; you have to unmount those before /mnt/, or use umount -R.
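
Put together, the clean shutdown sequence from the live environment would be (a sketch; pool name and mount point as used earlier in this thread):

```shell
# Unmount /mnt and every mount nested below it in one step (children first),
# then export the pool so the first boot can import it cleanly
umount -R /mnt
zpool export rpool
```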

Thanks. This worked for me as well.

That being said, the behavior in NixOS is interesting; I have always used partuuids to create my zpools on Linux, and this is the first time I have seen this weirdness with VMware.

You can also set

boot.zfs.devNodes = "/dev/disk/by-path";
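
If I understand the option correctly, it changes the directory that the stage-1 import scans for pool member devices, roughly equivalent to running (a sketch; the NixOS default is /dev/disk/by-id, which is empty on this VM):

```shell
# Scan the by-path symlinks instead of the (empty) by-id directory
zpool import -d /dev/disk/by-path rpool
```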

So, because I wanted to move the /nix directory to a different dataset, I ended up re-installing.

Instead of using LVM, I set boot.zfs.devNodes = "/dev/disk/by-partuuid"; and it worked perfectly.

Setting the boot.zfs.devNodes option to either "/dev/disk/by-path" or "/dev/disk/by-partuuid" solved the problem. Thank you all.


I had the same problem, same solution. Should this be the default?