Using `flake-utils.lib.eachSystem` for reproducible NixOS + home-manager configurations across CPU arches

About this question/issue

I am trying to use flake-utils.lib.eachSystem for both my nixosConfigurations and homeConfigurations, and I am stuck with an attributes-missing error, shown below.

# please ignore the dirty working directory warnings
[gildedguy@stellapent-cier:/workspaces/ajhalili2006/nixops-config]$ nix run home-manager -- switch --flake .
warning: Git tree '/workspaces/ajhalili2006/nixops-config' is dirty
warning: Git tree '/workspaces/ajhalili2006/nixops-config' is dirty
warning: Git tree '/workspaces/ajhalili2006/nixops-config' is dirty
warning: Git tree '/workspaces/ajhalili2006/nixops-config' is dirty
error: flake 'git+file:///workspaces/ajhalili2006/nixops-config' does not provide attribute 'packages.x86_64-linux.homeConfigurations."gildedguy".activationPackage', 'legacyPackages.x86_64-linux.homeConfigurations."gildedguy".activationPackage' or 'homeConfigurations."gildedguy".activationPackage'

I checked the outputs via nix flake show and confirmed that something has gone wrong.

[gildedguy@stellapent-cier:/workspaces/ajhalili2006/nixops-config]$ nix flake show
warning: Git tree '/workspaces/ajhalili2006/nixops-config' is dirty
git+file:///workspaces/ajhalili2006/nixops-config
├───homeConfigurations: unknown
└───nixosConfigurations
    ├───aarch64-linux: NixOS configuration
    └───x86_64-linux: NixOS configuration

Should I rewrite them under the packages attribute as subattrs?

The Why

I want to make both my NixOS and home-manager configurations workable across CPU arches without the pain of abusing Git branches for arch-specific configs, just as I do for devShells via devenv. (For Hack Clubbers reading this, I am getting a Raspberry Pi 5 from High Seas general YSWS program, pending warehouse shipment.)

Additional context

Here’s a draft configuration from my git stash at the moment:

{
  description = "Andrei Jiroh's NixOS and home-manager configurations in one place, separate from the dotfiles repo";

  # try to be in-sync with the nix-channels
  inputs = {
    # nixpkgs itself
    nixpkgs.url = "https://flakehub.com/f/NixOS/nixpkgs/0.1.0.tar.gz";

    # home-manager
    home-manager = {
      url = "github:nix-community/home-manager/master";
      inputs.nixpkgs.follows = "nixpkgs";
    };

    # Determinate Nix
    determinate = {
      url = "https://flakehub.com/f/DeterminateSystems/determinate/0.1";
      inputs.nixpkgs.follows = "nixpkgs";
    };

    # Community Extras
    nixos-hardware.url = "github:NixOS/nixos-hardware/master";
    vscode-server.url = "github:nix-community/nixos-vscode-server";
    vscode-server.inputs.nixpkgs.follows = "nixpkgs";
    
    # flake utils
    flake-utils.url = "github:numtide/flake-utils";

    # nix-ld
    nix-ld = {
      url = "github:Mic92/nix-ld";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = {
    self,
    nixpkgs,
    home-manager,
    nixos-hardware,
    determinate,
    vscode-server,
    nix-ld,
    flake-utils,
    systems
  }:
    flake-utils.lib.eachSystem [
      "x86_64-linux"
      "aarch64-linux"
    ] (system:
      let
        pkgs = import nixpkgs { inherit system; };
      in {
        nixosConfigurations = {
          stellapent-cier = nixpkgs.lib.nixosSystem {
            inherit system;
            modules = [
              ./hosts/stellapent-cier/configuration.nix

              # load Determinate Nix and the rest
              determinate.nixosModules.default
              home-manager.nixosModules.home-manager
              vscode-server.nixosModules.default
              nix-ld.nixosModules.nix-ld

              # one-liners?
              { programs.nix-ld.dev.enable = true; }
            ];
          };
          
        };
        homeConfigurations = {
          gildedguy = home-manager.lib.homeManagerConfiguration {
            inherit pkgs;
            modules = [
              ./shared/home-manager/main.nix
              {
                home.username = "gildedguy";
                home.homeDirectory = "/home/gildedguy";
              }
            ];

            extraSpecialArgs = {
              home = {
                useUserPackages = true;
                useGlobalPkgs = true;
              };
            };
          };
        };
      }
    );
}

For the current state, see flake.nix · 32895a192eb2a196f6fbc1c08c2ee57782f8bee5 · Andrei Jiroh Lair / Personal NixOS and home-manager configurations · GitLab (github mirror)

nixosConfigurations and homeConfigurations are not grouped by system, so they should be outside the eachSystem call. You can declare them in a separate attrset and merge them with // like this:

# System-independent outputs.
{
  nixosConfigurations = { ... };
  homeConfigurations = { ... };
}
// # Merge operator
# System-dependent outputs...
flake-utils.lib.eachSystem [ ... ] (system: {
})
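Put concretely, a sketch of the whole outputs function after the split might look like this (hedged: host and module names are taken from the question, and note that the standalone home-manager entry point is home-manager.lib.homeManagerConfiguration, not lib.homeConfigurations):

```nix
outputs = { self, nixpkgs, home-manager, flake-utils, ... }:
  # System-independent outputs: keyed by hostname/username, not by system.
  {
    nixosConfigurations.stellapent-cier = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./hosts/stellapent-cier/configuration.nix ];
    };

    homeConfigurations.gildedguy = home-manager.lib.homeManagerConfiguration {
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      modules = [ ./shared/home-manager/main.nix ];
    };
  }
  # System-dependent outputs get their per-system nesting from eachSystem.
  // flake-utils.lib.eachSystem [ "x86_64-linux" "aarch64-linux" ] (system: {
    formatter = nixpkgs.legacyPackages.${system}.nixpkgs-fmt;
  });
```

With this shape, nix flake show should list homeConfigurations.gildedguy at the top level, which is where the home-manager CLI looks for it.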

You’re correct, but there’s more to this question. If you have a bunch of cattle to which you want to give identical configs across architectures without repeating yourself, that’s not quite enough.

In addition, you’ll want to generate per-arch nixosSystems, somewhat like this:

nixosConfigurations = builtins.listToAttrs (map (system: { 
  name = "stellapent-cier-${system}";
  value = nixpkgs.lib.nixosSystem { ... };
}) flake-utils.lib.defaultSystems);

That said, you will likely find that mapping across architectures isn’t that trivial - it isn’t always for packages, and certainly not for full system configurations. You’ll probably fare better reducing your module entrypoints to just one configuration.nix and passing your flake inputs through specialArgs so that you can use imports instead; then the specific definitions of the systems you want to deploy to face nowhere near as much repetition, and you can encode meaningful differences where they belong. It’s almost like flakes were designed to subtly push you in this direction…
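A minimal sketch of that specialArgs pattern, assuming the inputs from the flake above (paths illustrative):

```nix
# flake.nix (excerpt)
outputs = { self, nixpkgs, ... }@inputs: {
  nixosConfigurations.stellapent-cier = nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    # make all flake inputs available to every module as `inputs`
    specialArgs = { inherit inputs; };
    modules = [ ./hosts/stellapent-cier/configuration.nix ];
  };
};
```

```nix
# hosts/stellapent-cier/configuration.nix (excerpt)
{ inputs, ... }: {
  imports = [
    inputs.determinate.nixosModules.default
    inputs.home-manager.nixosModules.home-manager
  ];
}
```

Adding a new host then becomes a new entry in nixosConfigurations plus its own configuration.nix, with the flake-level wiring unchanged.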

Separating out the module definitions feels to me like better practice anyway; the way your flake currently looks puts a bunch of logic that is firmly in configuration territory into the project definition file.

And while I’m recommending better practices… Replace this:

pkgs = import nixpkgs { inherit system; };

with this:

pkgs = nixpkgs.legacyPackages.${system};

… and put it where you use it, no need to clobber the whole scope if it’s just an input into your homeConfigurations.
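i.e., roughly this (a sketch, using the standalone homeManagerConfiguration entry point):

```nix
homeConfigurations.gildedguy = home-manager.lib.homeManagerConfiguration {
  # scoped to the one place that needs it instead of a top-level let binding
  pkgs = nixpkgs.legacyPackages.${system};
  modules = [ ./shared/home-manager/main.nix ];
};
```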

Also, are you certain your home config’s extraSpecialArgs does anything? AIUI home.useUserPackages and home.useGlobalPkgs are options of home-manager’s NixOS module (they literally do not make sense outside of that context), so they should be entirely inert unless you set them from a NixOS configuration, and certainly when you merely expose them through the module args: Appendix B. NixOS Configuration Options

I am thinking of using the merge operator alongside specialArgs for hardware-specific configs, but can I pass the hardware part via custom CLI flags (with --impure), similar to a tool I found somewhere (maybe here, but I forgot the name)? Any thoughts on that?

That seems like it’s eating away at the concept of a “reproducible” config, if you’re allowing the environment to influence the builds.

PS. // is not a merge operator, it’s the “update” operator. It will clobber any previously specified attributes with the same name; there’s no “merging” happening.

Okay, I’ll take note of that advice. @waffle8946

What you should probably do instead is define a nixosConfiguration for each independent piece of hardware (regardless of whether you will deploy multiple machines of that particular type), and add a separate module to modules specifically for that hardware type.

Then you can build the nixosConfiguration for the correct hardware with nixos-rebuild --flake <path>#<hostnameForSpecificHardware>. This stretches the concept of “hostname”, of course, but it’s really just an identifier for the configuration you’re deploying anyway, and should not be an issue as long as you resolve issues with shared hostnames/machine IDs at runtime.
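For illustration, a hedged sketch of that layout with one x86_64 host and one Raspberry Pi (hostnames, module paths, and the nixos-hardware attribute are made up for the example):

```nix
nixosConfigurations = {
  # the existing x86_64 machine
  stellapent-cier = nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    modules = [ ./modules/common.nix ./hosts/stellapent-cier.nix ];
  };

  # a Raspberry Pi 5, reusing the shared module
  fruitpi = nixpkgs.lib.nixosSystem {
    system = "aarch64-linux";
    modules = [
      ./modules/common.nix
      ./hosts/fruitpi.nix
      # hypothetical attribute; check nixos-hardware's actual module list
      nixos-hardware.nixosModules.raspberry-pi-5
    ];
  };
};
```

Each one is then deployed with nixos-rebuild switch --flake .#stellapent-cier or .#fruitpi, as described above.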

I’ll take the #<hostnameForSpecificHardware> approach, but how do I handle the multiarch home-manager config part without having to do binary emulation outside x86-64 (like Docker multi-arch builds via QEMU)?

PS: sorry for the necrobump, but I had to follow up.

What parts of your home-manager config are architecture-dependent?

the pkgs part, aka these lines from the flake template for context: home-manager/templates/standalone/flake.nix at c12dcc9b61429b2ad437a7d4974399ad8f910319 · nix-community/home-manager · GitHub

I see; I’m not familiar with a Flakes-based setup for HM.

My suggestion would be the same as TLATER’s: make a homeConfiguration for each machine, even if they reuse the same configuration modules.

Just like a NixOS configuration is tied to a specific machine, including its hardware, so is a Home Manager configuration. I was under the impression that Home Manager would use homeConfigurations.<username>@<hostname> if it didn’t find a homeConfigurations.<username>, but it looks like I’m wrong. So you’ll have to use an explicit home-manager switch --flake .#myArchSpecificConfiguration.
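Sketching that per-machine naming scheme (hostnames illustrative; this assumes the current homeManagerConfiguration API):

```nix
homeConfigurations = {
  "gildedguy@stellapent-cier" = home-manager.lib.homeManagerConfiguration {
    pkgs = nixpkgs.legacyPackages.x86_64-linux;
    modules = [ ./shared/home-manager/main.nix ];
  };
  "gildedguy@fruitpi" = home-manager.lib.homeManagerConfiguration {
    pkgs = nixpkgs.legacyPackages.aarch64-linux;
    modules = [ ./shared/home-manager/main.nix ];
  };
};
```

Both entries reuse the same shared module; only the pkgs architecture differs, and each is activated with home-manager switch --flake .#gildedguy@<hostname>.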

Alternatively:

  • Use Home Manager as a NixOS module, rather than standalone. You still have to specify the architecture for the nixosConfigurations, but there would be no homeConfigurations, so that’s one less place to specify the architecture.
  • Use --impure, an environment variable, and builtins.getEnv to “pass architecture as an argument” to your flake
  • Use a non-flake setup.
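For the second bullet, the idea would be roughly the following (a sketch only; HM_SYSTEM is an arbitrary variable name, and this deliberately trades reproducibility for convenience):

```nix
homeConfigurations.gildedguy = home-manager.lib.homeManagerConfiguration {
  # builtins.getEnv returns "" for unset variables, and only sees the
  # environment when evaluation runs with --impure
  pkgs = nixpkgs.legacyPackages.${
    let env = builtins.getEnv "HM_SYSTEM";
    in if env == "" then "x86_64-linux" else env
  };
  modules = [ ./shared/home-manager/main.nix ];
};
```

Invoked as, e.g., HM_SYSTEM=aarch64-linux home-manager switch --flake . --impure.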

It does, actually: home-manager/home-manager/home-manager at 74f0a8546e3f2458c870cf90fc4b38ac1f498b17 · nix-community/home-manager · GitHub

I’m not sure how to best answer this, honestly. If the configurations are tied to NixOS hosts, you should probably use the NixOS module instead of using homeConfigurations, then you get the host arch for free, as well as better integration.

If they are not, you know your use cases better. Using a name that includes the target system may well be necessary if you need to deploy home configurations to varied architectures where all the machines are cattle. It strikes me as weird to deploy home configurations to cattle, though.

Basically, depends on your use case, I’d need to understand what you’re doing better to give a good recommendation.