Nix functions produce correct output (I think?) but quadlet-nix doesn't see/receive any containers

Hi, I am trying to dynamically import modules to make my life easier in the future. I have made a couple of simple functions to help me with this, and it usually works, except in this specific case, and I can't figure it out.

I have written this to dynamically import modules from a pathlist:

{config, lib, inputs, ...}:
with lib;
let
  pathList = (import ./../../../helpers/importAllModules.nix).getModulesFromDir {dir = ./containers;};

  importPathList = cfg: lib: paths: boxAttrs: i:
    if i < (builtins.length paths)
    then importPathList cfg lib paths (boxAttrs // ((import (builtins.elemAt paths i)) { lib = lib; cfg = cfg; })) (i + 1)
    else boxAttrs;

  importBoxes = {cfg, lib, pathList}: importPathList cfg lib pathList {} 0;
  cfg = config.quadlets.containers.streaming;

in {
  options.quadlets.containers.streaming = {
    enable = mkEnableOption "Enables streaming pod and containers";
    gid = mkOption {
      type = types.str;
      example = "994";
      description = "gives group id to pod and containers";
    };
    uid = mkOption {
      type = types.str;
      example = "991";
      description = "gives user id to pod and containers";
    };
    tz = mkOption {
      type = types.str;
      example = "Europe/Amsterdam";
      description = "give timezone to containers for internal timekeeping";
    };
    podName = mkOption {
      type = types.str;
      example = "streaming";
      default = "streaming";
      description = "gives a name to the contianer cluster";
    };
  };
  config = mkIf cfg.enable {
    virtualisation.quadlet = let
      inherit (config.virtualisation.quadlet) networks pods;
    in {
      containers = {} // builtins.trace
        (builtins.deepSeq
          (importBoxes { inherit cfg lib; pathList = pathList; })
          (importBoxes { inherit cfg lib; pathList = pathList; }))
        (importBoxes { inherit cfg lib; pathList = pathList; });
    };
  };
}

This is a typical module/container:

{ cfg, lib, ... }:
with lib;
{
  jellyfin-box = {
   autoStart = true;
   unitConfig = {
     Description = "jellyfin-test";
   };
   containerConfig = {
     image = "lscr.io/linuxserver/jellyfin:latest";
     autoUpdate = "registry";
     devices = [
       "/dev/dri:/dev/dri"
     ];
     environments = {
        TZ = cfg.tz;
        GID = cfg.gid;
        UID = cfg.uid;
     };
     volumes = [
       "/data/streaming/media:/media:rw,z"
       "/var/cache/jellyfin:/cache:Z"
       "/etc/container-data/streaming/jellyfin:/config:Z"
     ];
     tmpfses = [
       "/tmp"
     ];
     publishPorts = [
       "0.0.0.0:8096:8096/tcp"
     ];
     networks = [
       "host"
     ];
   } // optionalAttrs (cfg.podName != null) {pod = cfg.podName;};
   serviceConfig = {
     SuccessExitStatus = "0 143";
   };
  };
}

importAllModules.nix (I need to rename it) is a function that finds all files in a specific directory, checks whether they are modules, and returns a list of paths. I know this works, as I use it elsewhere in my code.
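Roughly, the idea is something like this (a simplified sketch of it, not the exact file):

# helpers/importAllModules.nix - simplified sketch
{
  getModulesFromDir = { dir }:
    let
      entries = builtins.readDir dir;
      isNixFile = name:
        entries.${name} == "regular" && builtins.match ".*\\.nix" name != null;
    in
      map (name: dir + "/${name}") (builtins.filter isNixFile (builtins.attrNames entries));
}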

The output of the trace is:

trace: { bazarr-box = «thunk»; flaresolverr-box = «thunk»; jellyfin-box = «thunk»; prowlarr-box = «thunk»; qbittorrent-box = «thunk»; radarr-box = «thunk»; sonarr-box = «thunk»; }

Which at the top level looks correct, but then I don't see any containers show up in podman ps or as systemd services.

Maybe it's something to do with lazy evaluation, and I need to force it to load the variables before evaluating? I tried with some hardcoded modules in a previous test, which did seem to work, but I'm unsure if that's the culprit, since I did it in a separate file from the module file, so maybe it's something to do with that.

I don't know what it could be.

Also, some tips on how to write Nix logic cleanly would be appreciated. Mine is very messy.

Thanks in advance.

Instead of dynamic module imports, use lib.mkEnableOption options to turn the modules on, or alternate entrypoints that import a different subset of modules. Your approach goes against the design of the module system, and always leads to these headaches.
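In sketch form, reusing the option path and quadlet-nix attributes from your own snippets, an enable-gated container module would look roughly like this:

# containers/jellyfin.nix - rough sketch, names taken from the snippets above
{ config, lib, ... }:
{
  options.quadlets.containers.jellyfin.enable =
    lib.mkEnableOption "the jellyfin container";

  config = lib.mkIf config.quadlets.containers.jellyfin.enable {
    virtualisation.quadlet.containers.jellyfin-box = {
      containerConfig.image = "lscr.io/linuxserver/jellyfin:latest";
    };
  };
}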

Thank you for the reply.

For clarification: is the best approach to separate each container into its own module and enable/import them one by one, or to have them imported elsewhere and added as a variable option in this module?

Are you able to elaborate, or point me to a resource on why it works like this? I can kind of infer a reason, but a clear set of design goals to adhere to would help.

Edit: I'm kind of learning as I go.

I’d probably take this approach. “Best” is of course always relative, but this is exactly how the module system is used in NixOS - see e.g. services.jellyfin.

I’d actually go a step further and say the best practice would be to ditch containers and use NixOS modules to begin with - NixOS modules generally use the systemd “hardening” features that also place these applications in cgroups, just like containers, so using OCI images instead just gives you a usability penalty and maintenance overhead that would otherwise be managed entirely by NixOS. I appreciate that jump is a bit further, though :wink:
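For your jellyfin example the native route is roughly just this (using the existing services.jellyfin module from nixpkgs):

# Native NixOS service instead of the OCI container:
{
  services.jellyfin = {
    enable = true;
    openFirewall = true;  # expose the web UI instead of publishPorts
  };
}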

So, the “dynamic import” temptation I think comes from an imperative mindset - you think of “importing something” as an action you perform - like importing a python module, with some kind of side effect that results in the module being “imported” and therefore now in use.

This does not fit into the very functional design behind NixOS. NixOS merges all the modules that are imported into a large set, figures out which options are ultimately set by combining and overriding values from different modules, and then computes a big script out of this merged superset that ultimately performs the deployment.
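To make the merging concrete, here's a toy example using lib.evalModules, the same machinery NixOS itself runs on - two modules set the same option, and priority decides what ends up in the merged configuration:

let
  lib = (import <nixpkgs> { }).lib;
  result = lib.evalModules {
    modules = [
      { options.greeting = lib.mkOption { type = lib.types.str; }; }
      { greeting = lib.mkDefault "hello"; }  # low priority
      { greeting = "hello, world"; }         # normal priority, wins the merge
    ];
  };
in
  result.config.greeting  # evaluates to "hello, world"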

By attempting to make things dynamic, you go against the very design of the module system - this means option priority no longer works, and generally NixOS’ composition features end up broken. It’ll work, but it’s simply the wrong architectural approach.

There is some documentation around this, but it’s definitely the worst-documented part of the NixOS ecosystem (and yet one of IMO the most impressive and well-designed!). The nixpkgs manual has some, as does nix.dev.


Giving my best attempt to explain what I think is the best way to architect NixOS configurations (heavily opinionated, I’ll admit), let’s take your jellyfin container as an example. I can see two slightly different solutions.

Firstly, consider why you would like it to be optional. If it should not always be present, is there a reason for this? Perhaps you’d like to have one configuration to rule all your machines, but only deploy jellyfin on a subset of them.

In either case, I consider it best practice to have per-host entrypoints: each host with a meaningfully different configuration should get its own entrypoint. (Note that you can have multiple hosts share one configuration if they are supposed to be functionally equal - though this does take a little bit of network engineering so you get e.g. runtime hostname assignments and dynamic IP resolution. At that point we're getting into cattle/cluster territory, which takes far more engineering knowledge than I'm going to be able to summarize in a single post.)

  • One approach is to simply use the presence of the modules as a switch for whether they should be enabled or not (i.e., do nothing crazy at all):

    # hosts/host1.nix
    {
      imports = [
        ./containers/jellyfin.nix
      ];
    }
    
    # hosts/host2.nix
    {
      imports = [
        ./containers/nextcloud.nix
      ];
    }
    

    The real strength of the module system when used like this becomes apparent when two hosts share one module. Say one of your hosts lives in a different timezone:

    # hosts/host3.nix
    { lib, ... }:
    {
      imports = [
        ./containers/jellyfin.nix
      ];
    
      # Imagining a module interface here - you don't share your intended API
      virtualisation.quadlet.jellyfin.containerConfig.environments.TZ =
        lib.mkForce "Europe/Berlin";
    }
    

    So with this approach, the import isn’t dynamic, but it is explicit, and each host has a specific set of imports.

  • If, however, you have a lot of these, and you really want them to be enabled on most hosts, you could consider a NixOS-style module design, and add a .enable option to each:

    # hosts/host2.nix
    {
      imports = [
        # The `default.nix` of the containers module imports *all* containers,
        # possibly via `readDir`
        ./containers
      ];
    
      # Assuming that we have `jellyfin` disabled by default; you could of course
      # invert this, depending on what the intent is.
      virtualisation.quadlet.jellyfin.enable = false;
    }
    

In either case, your modules would be designed the other way around. You’d have one module (say containers/default.nix) which specifies an interface for how its submodules can be configured, and has some generic config attribute which iterates over any defined submodules to set up a quadlet for them.
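Sketched out (every my.containers option name below is made up for illustration; the virtualisation.quadlet.containers side is the interface you're already using):

# containers/default.nix - rough sketch, option names are illustrative only
{ config, lib, ... }:
let
  cfg = config.my.containers;
in
{
  options.my.containers = lib.mkOption {
    default = { };
    description = "Containers to set up as quadlets.";
    type = lib.types.attrsOf (lib.types.submodule {
      options = {
        enable = lib.mkEnableOption "this container";
        image = lib.mkOption { type = lib.types.str; };
        environments = lib.mkOption {
          type = lib.types.attrsOf lib.types.str;
          default = { };
        };
      };
    });
  };

  # One generic piece of config that turns every enabled entry into a quadlet.
  config.virtualisation.quadlet.containers =
    lib.mapAttrs
      (_name: c: {
        autoStart = true;
        containerConfig = { inherit (c) image environments; };
      })
      (lib.filterAttrs (_name: c: c.enable) cfg);
}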

Then for each actual implementation, you have a separate module which implements this interface - depending on how much the configurations vary, you can just set the options which need to be set for that service in that module.
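For example, a jellyfin module implementing that hypothetical interface could be as small as:

# containers/jellyfin.nix - imported by containers/default.nix
{ lib, ... }:
{
  my.containers.jellyfin = {
    enable = lib.mkDefault true;
    image = "lscr.io/linuxserver/jellyfin:latest";
    environments.TZ = lib.mkDefault "Europe/Amsterdam";
  };
}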

To get a bit of a feel for how this could work, I’d for example point you at the prometheus exporters in nixpkgs. The options search is here. You’ll see that it’s quite easy to express a very configurable interface that results in some rather nice inheritance and exposes options that are very flexible when used downstream.


Holy sh*t thanks,

My issue was that I had a lot of the same variables that I wanted to set for a large group of containers/a pod, hence why I tried to import them and set the variables dynamically. This would let me automate setting these up, especially when they rely on other sections of the NixOS configuration being set. Like in this example, I needed a specific user for only this group/pod.

The goal was to eventually make my life easier when setting up future machines with just a couple of changes.

But as you said, I could ditch containers entirely and rely on cgroups. I was reading up on it this morning. I don't know much about interacting with them directly, but I knew that NixOS modules somehow did something with cgroups; I just couldn't find anything concrete on how exactly they work in the NixOS configuration context.
The explanations I did find were very abstract. A bit like your explanation (not that it isn't good info!), but I'm missing a ‘how’ or a ‘what happens’ explanation.

It's one of the reasons I stuck with containers. The other reason was that I wanted to manage them separately from the base system. I didn't look into whether this was possible using just NixOS; I just went with what I knew.


So from your explanation, it would be best to either import the containers the dumb way, without variables, and then override attributes, or to ditch containers entirely and just rely on the NixOS modules, used in a similar way.

Oh, well, that’s just done using systemd features, see here: https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#Sandboxing

If you want to know what settings a specific module enables, you’ll have to look at the source for that module. For example, jellyfin sets these options: nixpkgs/nixos/modules/services/misc/jellyfin.nix at 9cb344e96d5b6918e94e1bca2d9f3ea1e9615545 · NixOS/nixpkgs · GitHub
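To give a flavour of what those look like in a NixOS configuration, here is an illustrative handful of such sandboxing settings (the service name is made up and this is a hand-picked subset - the linked jellyfin module is the authoritative list for that service):

# Illustrative only - see the linked jellyfin.nix for what is actually set.
{
  systemd.services.some-service.serviceConfig = {
    DynamicUser = true;        # run as an ephemeral, unprivileged user
    ProtectSystem = "strict";  # mount the file system hierarchy read-only
    ProtectHome = true;        # hide /home, /root and /run/user
    PrivateTmp = true;         # give the unit its own /tmp and /var/tmp
    NoNewPrivileges = true;    # block privilege escalation via setuid etc.
  };
}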