Disko: still getting trace warning after updating my config?

On a VPS that hadn’t been updated for some time, I got this warning:

warning: Git tree '/home/jeroen/devel/g1tlab/1nnovatio/nixos-servers' is dirty
trace: warning: The legacy table is outdated and should not be used. We recommend using the gpt type instead.
Please note that certain features, such as the test framework, may not function properly with the legacy table type.
If you encounter errors similar to:
"error: The option `disko.devices.disk.disk1.content.partitions."[definition 1-entry 1]".content._config` is read-only, but it's set multiple times,"
this is likely due to the use of the legacy table type.

However, after updating according to the docs (i.e. partitions [ … ] → partitions { … } and changing some type attributes), the warning still persists.
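For reference, the legacy-table form I migrated away from looked roughly like this (from memory, so details may be off):

content = {
  type = "table";
  format = "gpt";
  partitions = [
    {
      name = "ESP";
      start = "1MiB";
      end = "500MiB";
      bootable = true;
      content = {
        type = "filesystem";
        format = "vfat";
        mountpoint = "/boot";
      };
    }
    # ...
  ];
};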

My config now looks like this:

{
  disko.devices = {
    disk = {
      vda = {
        device = "/dev/vda";
        type = "disk";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              type = "EF00";
              start = "1MiB";
              end = "500MiB";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            };
            root = {
              start = "500MiB";
              end = "100%";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
          };
        };
      };
    };
  };
}

Did I miss something obvious?

hmm, maybe I did something wrong, will recheck later :slight_smile:


I tried to reproduce this but was unable to. Can you give some more hints about what exactly you did, and is your config publicly available somewhere?

I took another look and noticed that I also saw the warning when rebuilding a host from the flake that doesn’t even use disko. Which stumps me, because I thought Nix is lazy???

In other words: I have multiple hosts in the same flake; some use disko and some don’t. When I changed the disko config mentioned above to get rid of the warning, I only changed one host to start with, and the warning persisted. So it seems that the warning is not related to the rebuild of that particular host, but is caused by the other hosts not having been updated yet?

If that’s the case, it’s not an issue with disko, but with (my understanding of) the (non-lazy???) evaluation of Nix code in a multi-host flake…
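My mental image of laziness is basically this minimal sketch (e.g. evaluated with nix-instantiate --eval):

let
  hosts = {
    a = builtins.trace "evaluating a" 1;
    b = builtins.trace "evaluating b" 2;
  };
in
hosts.a # prints only "trace: evaluating a"; b is never evaluated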

In that case it would be helpful if the trace produced by the warning referenced the top-level flake attribute from which it was triggered (if that’s possible?).

While I’m at it: I’m a little concerned about what might happen to the partitioning of the existing hosts after updating all the disko configs. The changes I’m going to make are purely syntactical and do not constitute an actual change in desired state, but what guarantee do I have that a nixos-rebuild switch is not going to reformat my partitions (e.g. if I make a mistake in the rewrite)? Or is the worst that can happen an incorrect generated fstab, in which case I just reboot into a previous generation?

Just to confirm: after updating the disko configs for all hosts in the flake, the warnings are indeed gone. So far I’ve only done the rebuilds (without switching), due to the above concerns.

As an illustration, below is a slightly more complex config that also appears to build OK:

{ config, lib, ... }:

let
  # to avoid device reshuffling in more complex hardware setups, it is best to refer to disks by ID:
  disks = map (p: "/dev/disk/by-id/" + p) [
    "ata-ST3500418AS_6VM5AVL7"
    "ata-ST18000NM003D-3DL103_ZVT2LVCA"
    "ata-ST18000NM003D-3DL103_ZVT216GS"
    "ata-ST18000NM003D-3DL103_ZVT1VJ36"
  ];

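  # defFullDiskPartitionPV defines a disk with a single partition spanning the
  # whole disk, used as an LVM physical volume in the volume group `vgname`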
  defFullDiskPartitionPV = devpath: vgname: {
    device = devpath;
    type = "disk";
    content = {
      type = "gpt";
      partitions = {
        fulldisk = {
          start = "0%";
          end = "100%";
          content = {
            type = "lvm_pv";
            vg = vgname;
          };
        };
      };
    };
  };
in
{
  # disk is the top-level attribute set under disko.devices
  disko.devices = {
    disk = {
      system1 = {
        device = lib.elemAt disks 0;
        type = "disk";
        content = {
          type = "gpt";
          partitions = {
            ESP = {
              type = "EF00";
              start = "1MiB";
              end = "500MiB";
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            };
            swap = {
              start = "500MiB";
              end = "20GiB";
              content = {
                type = "swap";
                randomEncryption = true;
              };
            };
            root = {
              start = "20GiB";
              end = "150GiB";
              content = {
                type = "filesystem";
                format = "ext4";
                mountpoint = "/";
              };
            };
            system = {
              start = "150GiB";
              end = "100%";
              content = {
                type = "lvm_pv";
                vg = "vg_system";
              };
            };
          }; #<partitions
        }; #<content
      }; #<system1
      # ! still best to partition even if spanning the whole disk: https://unix.stackexchange.com/a/76642/395327
      data1 = defFullDiskPartitionPV (lib.elemAt disks 1) "vg_bigdata";
      data2 = defFullDiskPartitionPV (lib.elemAt disks 2) "vg_bigdata";
      data3 = defFullDiskPartitionPV (lib.elemAt disks 3) "vg_bigdata";
    }; #<disk
    lvm_vg = {
      vg_system = {
        type = "lvm_vg";
        lvs = {
          var = {
            size = "200GiB";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/var";
            };
          };
          home = {
            size = "50GiB";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/home";
            };
          };
        };
      };
      vg_bigdata = {
        type = "lvm_vg";
        lvs = {
          bigdata = {
            size = "100%FREE";
            content = {
              type = "filesystem";
              format = "ext4";
              mountpoint = "/bigdata";
            };
          };
        };
      };
    }; #<lvm_vg
  }; #<devices
}

disko has to be run manually to reformat anything.
Including the disko config in your configuration.nix basically just configures fileSystems.* (and some other settings).
In case a reboot would fail, you could just boot an old generation, as no changes are made to your disk. The upgrade path from the legacy table type to the new gpt type is sadly not fully automatic, since the new one uses partlabels and the old one used a numbered device scheme.
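Roughly, for a gpt disk the disko module ends up defining something like this (the partlabel naming is from memory, so treat it as a sketch):

fileSystems."/boot" = {
  device = "/dev/disk/by-partlabel/disk-vda-ESP";
  fsType = "vfat";
};

whereas the legacy table type pointed at numbered devices like /dev/vda1.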

About the other hosts and laziness: maybe you reference information from the other hosts’ configs, so they have to be evaluated?
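For example, a shared module along these lines (hypothetical, and assuming your inputs contain self) would force another host’s config to be evaluated:

{ inputs, ... }:
{
  # interpolating another host's evaluated config forces its evaluation
  networking.extraHosts =
    "10.0.0.2 ${inputs.self.nixosConfigurations.otherhost.config.networking.hostName}";
}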

Thanks, that confirms how it works and matches what I expected. So I can upgrade without fear :wink:

As for the evaluation of the other nixosConfigurations attributes: good point. I do have some shared config modules that are imported by most configurations, but to my understanding that still doesn’t explain why the other host configs are being evaluated. In my “mental image”, the “expression dependency tree” does have the same root, but from a host’s config downwards there are no references back upward, although I am using some “wrapper” logic (see below).

I apologise upfront for the following digression (I was going to publish this as a multi-config flake “framework”, but there’s still some work left to do for that…) :wink: :

...
      nixosConfigurations = (
        import ./nixos-builder.nix {
          inherit (nixpkgs) lib;
          inherit inputs;
        }
      ).getNixosConfigs ./hosts;
...

and for nixos-builder.nix:

{ inputs, ... }:
let

  inherit (inputs.nixpkgs) lib;

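  # mkSysConfig builds either a darwinSystem or a nixosSystem, depending on
  # whether the system string contains "darwin"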
  mkSysConfig = system: modules:
    if (lib.hasInfix "darwin" system) then
      inputs.darwin.lib.darwinSystem {
        inherit system modules;
        specialArgs = {
        };
      }
    else
      inputs.nixpkgs.lib.nixosSystem {
        inherit system modules;
        specialArgs = {
          inherit inputs system;
        };
      };

  # _getUsers returns a list of import paths containing user definitions
  _getUsers = users:
    map (u: ./. + "/users/${u}.nix") users # user expressions referred by list
    ++ [ (./. + "/users") ]; # add default

  # _getHostConfig reads cfgfile (`hosts/<host>/default.nix`) and returns a nixosSystem from the definitions in it
  _getHostConfig = host:
    let
      hostdef = import (./. + "/hosts/${host}") { inherit inputs; }; # this will return the attributes (for now { system modules users }) from the default host config
      system = hostdef.system;
      allmodules = [
        ./overlays # now a normal module, which also handles unstable and adds _module.args.unstable
      ]
      ++ hostdef.modules ++ _getUsers hostdef.users
      ++ [ inputs.disko.nixosModules.disko inputs.sops-nix.nixosModules.sops ];
    in
    mkSysConfig system allmodules;

in
{
  /*** `getNixosConfigs` should return a valid `nixosConfigurations` attribute set, i.e. with each value as returned by `lib.nixosSystem`:
    {
      hostname = lib.nixosSystem {
        inherit system modules;
        specialArgs = { ... };
      };
      ...
      host-x = { ... };
    }

    where `modules` is basically just the `imports = [ configuration.nix ... ]` list.
    That means that any logic to “choose” imports based on values known at this point (hostname, system, definitions in `hosts/<host>/default.nix`)
    can/should be handled here, in functions that just return the conditionally determined import paths, which are then passed as the `modules` parameter.
    The same for `users` (also defined in `hosts/<host>/default.nix`)
  */
  getNixosConfigs = machinedir:
      builtins.mapAttrs (n: v: (_getHostConfig n))
        (lib.filterAttrs (n: v: v == "directory") (builtins.readDir machinedir));
}
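
For completeness: each hosts/<host>/default.nix is expected to be a function returning { system, modules, users }, e.g. (names made up):

{ inputs, ... }:
{
  system = "x86_64-linux";
  modules = [ ./configuration.nix ./disko.nix ];
  users = [ "alice" ];
}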