Nix-native k3s auto-pruning of manifests?

I am working on a hobby project where I remotely deploy a mixed-architecture n-node k3s cluster with Colmena and the NixOS services.k3s options. I have zero experience with k8s/k3s or any kind of orchestrator, but it was actually surprisingly simple to get working (please do roast me for anything stupid you see).

I define each node's name, IP (target), architecture (system), role, and the path of the node-specific hardware.nix. These are function inputs for mkNode, which generates configurations based on some criteria (for example, manifests only need to be deployed to role = "server"). For a pure x86 setup it's working quite nicely so far, and I'll get my hands on some ARM machines for testing soon.
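For context, nodes.nix is just a list of attribute sets along these lines (the hostnames, addresses, and paths below are made-up placeholders, but the field names match what mkNode consumes):

```nix
# nodes.nix -- one attribute set per machine (placeholder values).
[
  {
    name     = "server-1";
    target   = "192.168.1.10";        # address Colmena deploys to
    system   = "x86_64-linux";
    role     = "server";              # "server", "client", or "both"
    hardware = ./hardware/server-1.nix;
    # stateVersion = "25.05";         # optional; falls back to the fleet default
  }
  {
    name     = "agent-1";
    target   = "192.168.1.11";
    system   = "aarch64-linux";
    role     = "client";
    hardware = ./hardware/agent-1.nix;
  }
]
```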

However, there is something that I’m not sure how to proceed with:

services.k3s.manifests only deploys manifests. Setting a manifest's enable flag to false does not prune it from the cluster; it only prevents deployment.

The only solution I can think of would be to write an alias that imperatively prunes manifests during deployment - something like colmena build && colmena apply && colmena exec -- {some prune command} - but I heavily dislike the idea of pruning imperatively when everything else is set up to deploy declaratively. I would also expect it to cause additional friction if I were to share these configs with someone else. Does anyone have an idea how to do this within Nix?
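The closest thing to a declarative shape I can come up with myself is an activation script that deletes stale files from the auto-deploy directory. This is an untested sketch: it assumes each declared manifest lands as &lt;name&gt;.yaml in /var/lib/rancher/k3s/server/manifests, and that k3s actually garbage-collects an AddOn's resources once its file disappears (which is exactly the part I'm unsure about):

```nix
# Untested sketch: on every activation, remove any *.yaml in the k3s
# auto-deploy directory that is not declared in services.k3s.manifests.
# Assumes the default naming scheme (<name>.yaml); verify before use.
{ config, lib, ... }:
let
  manifestDir   = "/var/lib/rancher/k3s/server/manifests";
  declaredFiles = map (n: n + ".yaml") (lib.attrNames config.services.k3s.manifests);
in
{
  system.activationScripts.pruneK3sManifests.text = ''
    if [ -d ${manifestDir} ]; then
      keep=" ${toString declaredFiles} "
      for f in ${manifestDir}/*.yaml; do
        [ -e "$f" ] || continue
        case "$keep" in
          *" $(basename "$f") "*) ;;   # still declared: keep
          *) rm -f "$f" ;;             # no longer declared: prune
        esac
      done
    fi
  '';
}
```

Whether k3s then cleans up the corresponding cluster resources is the open question; if it doesn't, this only tidies the directory, not the cluster.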

Unfortunately I haven’t implemented secrets management yet, so I won’t share my repo. But here are the main working bits:

{ self, nixpkgs, colmena, ... }@inputs:


let
  manifestsModule       = import ./modules/manifests.nix { lib = nixpkgs.lib; };
  nodes                 = import ./nodes.nix;
  fleetStateVersion     = "25.05";
  controllerSystem      = "x86_64-linux";
  
  serverNodes = builtins.filter (n: n.role == "server" || n.role == "both") nodes;
  firstServer = builtins.head serverNodes;


  mkNode = n: {
    name  = n.name;
    value = { name, lib, pkgs, ... }:
      let
        isServer     = (n.role == "server") || (n.role == "both");
        isAgent      = (n.role == "client") || (n.role == "both");
        apiURL       = "https://${firstServer.target}:6443";
        iAmBootstrap = isServer && (name == firstServer.name);
      in
      {
        # required for longhorn to work
        systemd.tmpfiles.rules = [
          "L+ /usr/local/bin - - - - /run/current-system/sw/bin/"
        ];
        
        networking.hostName  = name;
        nixpkgs.hostPlatform = n.system;
        system.stateVersion  = lib.mkDefault (n.stateVersion or fleetStateVersion);


        deployment.targetHost    = n.target;
        deployment.buildOnTarget = (n.system != controllerSystem);
        deployment.tags          = [ "node" n.role n.system ];


        imports = [
          ./modules/node.nix
          ./modules/locales.nix
          n.hardware
        ];

        # required for longhorn to work
        services.openiscsi = {
          enable = true;
          name = "iqn.2016-04.com.open-iscsi:${name}";
        };


        services.k3s = lib.mkIf (isServer || isAgent) (
          let role = if isServer then "server" else "agent";
          in {
            enable = true;
            inherit role;


            token = "redacted :)";


            serverAddr  = lib.mkIf (!iAmBootstrap) apiURL;
            clusterInit = iAmBootstrap;


            extraFlags = lib.mkIf isServer (toString [
              "--write-kubeconfig-mode 0644"
              "--disable local-storage"
              "--disable servicelb"
            ]);


            manifests = lib.mkIf isServer manifestsModule.manifests;


          }
        );


        networking.firewall = {
          enable = true;
          allowedTCPPorts = [ ];
          allowedUDPPorts = [ ];
          allowedTCPPortRanges = [ { from = ; to = ; } ]; # redacted
          allowedUDPPortRanges = [ { from = ; to = ; } ]; # redacted
        };
      };
  };
in
{
  meta.nixpkgs = import nixpkgs { system = controllerSystem; };


}
//
builtins.listToAttrs (map mkNode nodes)

As per the k3s docs, you can pass the --disable flag to disable specific components.

{
  services.k3s.extraFlags = [
    "--disable=traefik"
  ];
}

You’re already using it, but only for built-in components, I think.

Since services.k3s.manifests links the manifests into /var/lib/rancher/k3s/server/manifests, --disable should work.

However, it feels kind of weird, since you set the manifests in Nix code but disable them from k3s. I guess it could be useful when fiddling with your cluster, to try and reset things.

Thanks for your reply.

That’s what I thought at first as well, but when I re-read the documentation (and the title of that page in particular), it sounded to me like it only works for packaged components, not my own custom manifests.

Any file in /var/lib/rancher/k3s/server/manifests is considered a component/AddOn.

So even if it’s not built-in, it will work the same way.

I tried it; that doesn’t seem to be the case.