The deploy.nodes repetition is definitely annoying, DRY and whatnot; even if the hosts were all identical and used the same nixosConfigurations entries, you would still have to spell out a separate node for every host.
Luckily it’s also particularly easy to wrap that output into a function; you can replace the entire output with:
deploy.nodes = nixpkgs.lib.mapAttrs (hostname: config: {
  inherit hostname;
  profiles.system = {
    sshUser = "root";
    user = "root";
    path = deploy-rs.lib.x86_64-linux.activate.nixos config;
  };
}) self.nixosConfigurations;
… stuff like that only becomes a problem once you have exceptions for specific nodes, because then you have to filter or override individual entries, and that gets ugly fast.
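If it does come to that, recursiveUpdate over the generated set keeps it somewhat tidy; here’s a minimal sketch (the odin host and its jump-host exception are made up for illustration):

deploy.nodes = nixpkgs.lib.recursiveUpdate
  (nixpkgs.lib.mapAttrs (hostname: config: {
    inherit hostname;
    profiles.system = {
      sshUser = "root";
      user = "root";
      path = deploy-rs.lib.x86_64-linux.activate.nixos config;
    };
  }) self.nixosConfigurations)
  {
    # Hypothetical exception: this one host is only reachable via a jump host.
    # Note a plain // would replace the whole odin node; recursiveUpdate merges.
    odin.sshOpts = [ "-J" "jump.example.com" ];
  };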
As for the nixosConfigurations, I would push more of the module definitions into the module system. Something like:
# flake.nix
heimdall = nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  specialArgs.flake-inputs = inputs;
  modules = [
    ./hosts/heimdall/configuration.nix
    ./modules/servers/general
  ];
};

# ./modules/servers/general/default.nix
{ flake-inputs, ... }: {
  imports = [
    flake-inputs.disko.nixosModules.disko
    flake-inputs.agenix.nixosModules.default
  ];
}

# ./hosts/heimdall/configuration.nix
{
  imports = [
    # Consider putting all of heimdall's config in just one of these two files
    ../../modules/servers/heimdall
  ];
}
You can of course go further still, and e.g. just automatically infer hostnames from file names, but it’s hard to condense nixosSystem definitions much further without hiding relevant information (this tells you at a glance which hosts there are and how they differ), so personally I think this is the correct level of abstraction. I’d even consider lifting the level of abstraction a little and exposing which services each of these hosts runs in modules.
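For completeness, the filename-inference version could look something like this (a sketch, assuming one directory per host under ./hosts, all sharing the same module list; as said, I wouldn’t necessarily recommend it):

nixosConfigurations = nixpkgs.lib.mapAttrs
  (hostname: _type: nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    specialArgs.flake-inputs = inputs;
    modules = [
      ./hosts/${hostname}/configuration.nix
      ./modules/servers/general
    ];
  })
  # Every directory under ./hosts becomes a host named after it
  (nixpkgs.lib.filterAttrs (_name: type: type == "directory")
    (builtins.readDir ./hosts));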
If the number of very similar hosts still annoys you, I’d instead consider reworking the host configurations so that there are no differences between them at all (hostname assignment can be done with clever network configuration), and just deploying the same configuration to all of them.
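For the hostname part specifically, NixOS already supports taking the name from the network: leave networking.hostName empty and DHCP assigns it, so the exact same closure can go to every box. A minimal sketch:

# Shared configuration.nix: identical everywhere, name comes from the network
{ ... }: {
  # An empty hostName means the machine accepts whatever name DHCP hands out
  networking.hostName = "";
  networking.useDHCP = true;
}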
If you’re annoyed by having to specify specialArgs.flake-inputs and system, you can quite easily define your own nixosSystem:
# flake.nix
outputs = { self, nixpkgs, ... } @ inputs: {
  nixosConfigurations.heimdall = self.lib.nixosSystem [
    ./hosts/heimdall/configuration.nix
    ./modules/servers/general
  ];

  lib.nixosSystem = modules: nixpkgs.lib.nixosSystem {
    system = "x86_64-linux";
    specialArgs.flake-inputs = inputs;
    inherit modules;
  };
}
… or, retaining more flexibility, even allowing you to override the default system when needed:
# flake.nix
outputs = { self, nixpkgs, ... } @ inputs: {
  nixosConfigurations.heimdall = self.lib.nixosSystem {
    modules = [
      ./hosts/heimdall/configuration.nix
      ./modules/servers/general
    ];
  };

  lib.nixosSystem = args:
    nixpkgs.lib.nixosSystem (nixpkgs.lib.recursiveUpdate {
      system = "x86_64-linux";
      specialArgs.flake-inputs = inputs;
    } args);
}
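Overriding the default is then just a matter of passing it in, e.g. (the aarch64 host here is hypothetical):

nixosConfigurations.ratatoskr = self.lib.nixosSystem {
  # Hypothetical aarch64 host; anything not overridden keeps its default
  system = "aarch64-linux";
  modules = [ ./hosts/ratatoskr/configuration.nix ];
};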
Personally I would place these kinds of helper functions (as well as the deploy.nodes) after the nixosSystem definitions; no reason to put the library stuff above the actual business logic. If your list grows, I think it’s also valid to put them in a separate lib.nix, as long as you keep your function names descriptive, e.g. something like this:
# flake.nix
outputs = { ... } @ inputs: {
  lib = import ./lib.nix inputs;
}

# lib.nix
{ nixpkgs, ... } @ inputs: {
  nixosSystem = args: nixpkgs.lib.nixosSystem (nixpkgs.lib.recursiveUpdate {
    system = "x86_64-linux";
    specialArgs.flake-inputs = inputs;
  } args);
}
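… with the call sites in flake.nix staying exactly as before:

nixosConfigurations.heimdall = self.lib.nixosSystem {
  modules = [
    ./hosts/heimdall/configuration.nix
    ./modules/servers/general
  ];
};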