Help organizing new Flake for multiple systems

Hello! I am new to NixOS, but I have started on a journey to migrate various systems from being managed by a combination of Chef and Ansible to NixOS.

Something I’m still struggling with is how to organize everything without duplicating too much code. For my flake.nix, this is what I have come up with so far:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
    disko.url = "github:nix-community/disko";
    disko.inputs.nixpkgs.follows = "nixpkgs";
    nixinate.url = "github:matthewcroughan/nixinate";
  };

  outputs = { self, nixpkgs, nixinate, ... } @ inputs:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs { inherit system; };
      serverConfig = (import ./lib/server-config.nix { inherit pkgs nixpkgs system inputs; }).serverConfig;

      myNodes = {
        # These are for use by nixos-anywhere to provision a brand-new server
        proxmox-one-disk = { hardwareProfile = "proxmox-one-disk"; roleName = "bare"; };

        # And these are individual servers. I need to figure out how to load
        # them from a file or something more dynamic.
        nixtest1-01 = { hardwareProfile = "proxmox-one-disk"; hostName = "nixtest1-01"; targetHost = "192.168.5.224"; };
        nixtest1-02 = { hardwareProfile = "proxmox-one-disk"; hostName = "nixtest1-02"; targetHost = "192.168.5.215"; };
        nixtest2-01 = { hardwareProfile = "proxmox-one-disk"; hostName = "nixtest2-01"; targetHost = "192.168.5.227"; };
      };
    in {
      apps = nixinate.nixinate.x86_64-linux self;
      nixosConfigurations = builtins.mapAttrs (_name: nodeArgs: serverConfig nodeArgs) myNodes;
    };
}

And then in server-config.nix, I have:

{ pkgs, nixpkgs, system, inputs, ... }:
{
  serverConfig = { roleName ? "default", hostName ? "default-host", hardwareProfile ? "vm", targetHost ? "", sshUser ? "root" }:
    nixpkgs.lib.nixosSystem {
      inherit system;
      modules = [
        ../configuration.nix
        (import ../modules/roles/${roleName}.nix inputs)
        (import ../modules/hardware/${hardwareProfile}.nix inputs)
        (_: {
          networking.hostName = hostName;

          # TODO: make this optional
          # lib.mkIf (targetHost != "")
          _module.args.nixinate = {
            host = targetHost;
            sshUser = sshUser;
            buildOn = "remote"; # valid values are "local" or "remote"
            substituteOnTarget = true; # if buildOn is "local", substitute on the target ("-s")
            hermetic = false;
          };
        })
      ];
    };
}
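
For reference on how those imports work: each role and hardware module is a function that takes the flake inputs and returns an ordinary NixOS module, because server-config.nix calls them as (import ../modules/roles/${roleName}.nix inputs). A minimal sketch of what modules/roles/bare.nix could look like (contents illustrative, not my real config):

# modules/roles/bare.nix
# The outer function receives the flake inputs; it returns a normal
# NixOS module, since server-config.nix applies it with `inputs`.
inputs:
{ config, pkgs, ... }:
{
  services.openssh.enable = true;
  environment.systemPackages = [ pkgs.vim ];
}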

I’ve tried colmena, deploy-rs, and now nixinate. So far, nixinate seems to be the deploy tool that requires the fewest changes; with the other tools I kept ending up with duplicate definitions for each server.

My questions are:

  1. Does what I have so far fall under “standard practices”? i.e. one nixosSystem per server, one “hardwareProfile” per type of server (disk layout, provider, etc.), and one roleName for the kind of server, which pulls in one or more other modules (enabling docker, nginx, postgres, etc.)
  2. I tried using _module.args.nixinate = lib.mkIf (targetHost != "") { and a couple of other variations (one is sketched after this list) to make nixinate ignore machines without a targetHost defined. Is there a better way to do that? I basically want it to ignore the config profiles that I plan to use only with nixos-anywhere.
  3. Assuming 1. is sane enough, how do I read the list of servers from an external source?
    3b. Ultimately I want to read them from Netbox’s API. Can I make API calls like that from inside Nix?
  4. Are there any recommended examples or repos that manage a large number of machines? Most of the repos I’ve seen so far cover only a handful of personal computers and VMs.
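
For question 2, one of the variations I tried was to append the nixinate module conditionally in server-config.nix instead of using mkIf inside it. A sketch (untested, and I’m not sure whether nixinate then skips hosts without the args or errors on them):

  modules = [
    ../configuration.nix
    (import ../modules/roles/${roleName}.nix inputs)
    (import ../modules/hardware/${hardwareProfile}.nix inputs)
    (_: { networking.hostName = hostName; })
  ] ++ nixpkgs.lib.optionals (targetHost != "") [
    # Only attach the nixinate args when a targetHost is set, so profiles
    # meant purely for nixos-anywhere carry no deploy metadata.
    (_: {
      _module.args.nixinate = {
        host = targetHost;
        inherit sshUser;
        buildOn = "remote";
        substituteOnTarget = true;
        hermetic = false;
      };
    })
  ];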
  1. That is a subjective question. The nixos.org “standard” would be using NixOps, but it is a dormant (some would say deprecated) project. I do it with one nixosSystem per machine, each with different imports.

  2. Don’t know, but you could solve it through 1 as well.

  3. The short answer is no. You can use fetchurl if it is a GET request, but you have to know the result hash in advance, which means the response can never change (see the sketch after this list). Two other options:
    a. Set up a recurring process that downloads the data (as JSON) and commits it; then read it with builtins.readFile and builtins.fromJSON.
    b. Create a Nix plugin to do it (see nix-plugins for an example), but that is hacky!

  4. Not that I’m aware of; most setups with a large number of machines belong to companies and are usually private. Maybe haumea could help you organize your code.
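
A sketch of what I mean for 3: builtins.fetchurl can take a fixed sha256, which is what lets it run in pure evaluation, and also why the response can never change. The URL here is hypothetical and the hash is just a placeholder:

let
  # Hypothetical export endpoint; sha256 must be the real hash of the
  # response, pinned in advance, so the data is effectively frozen.
  hostsFile = builtins.fetchurl {
    url = "https://netbox.example.com/exports/hosts.json";
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
in
  builtins.fromJSON (builtins.readFile hostsFile)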

1 / 2. Thanks, I’ll take a look at your repo for some inspiration. I’m probably trying to optimize things too early.
3. That makes sense! I’m fine with an external process that updates a JSON file; I’m going to test that out, and I think it will solve part of my issue.
4. I’ll take a look at haumea; it looks like it might be useful. If I don’t go the JSON-file route, I could also see having an external script that automatically generates a .nix file for each server.

I don’t mind having a Nix file per server if it makes things easier, but my biggest concern is that the whole flow must be automatable. Right now I’m testing with something like: provision a VM using Terraform → Terraform calls nixos-anywhere to bootstrap the VM → manually add information about the new VM to flake.nix → use nixos-rebuild (or similar) to deploy the “correct” config to the new VM.

Thanks for the suggestion! I’ve been using something like this, and it’s been working great for the last few days:

  outputs = { self, ... } @ inputs:
    let
      # ... other bindings (pkgs, myBaseProfiles, etc.) elided ...
      serverConfig = (import ./lib/server-config.nix { inherit pkgs nixpkgs system lollypops inputs; }).serverConfig;
      myJsonNodes = builtins.fromJSON (builtins.readFile ./hosts.json);
    in {
      nixosConfigurations = builtins.mapAttrs (_name: nodeArgs: serverConfig nodeArgs) (myJsonNodes // myBaseProfiles);
    };

I have a separate Python script that queries the Netbox API and generates hosts.json. Most of my VMs are created by Terraform, so for now I just have Terraform run that script when needed.
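
In case it helps anyone, the entries in hosts.json simply mirror the arguments that serverConfig accepts; an abbreviated example with the same shape as the myNodes attrset from my first post:

{
  "nixtest1-01": {
    "hardwareProfile": "proxmox-one-disk",
    "hostName": "nixtest1-01",
    "targetHost": "192.168.5.224"
  },
  "nixtest1-02": {
    "hardwareProfile": "proxmox-one-disk",
    "hostName": "nixtest1-02",
    "targetHost": "192.168.5.215"
  }
}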
