[RFC 0163] Portable Service Layer


Excited to see this work maturing. Hopefully it’ll help unify some currently fragmented efforts.

I’m desperately waiting for another init/service supervisor fully working on NixOS (I know the RFC is not directly targeting this), but as this comment mentioned, some developers still don’t want to lose control over how a systemd service is formed. So how do we strike a balance between these two demands?

IMO, when systemd-specific settings appear, we could reuse parts of systemd’s code to provide a binary that acts as a middle ground between plain init/supervisors and a systemd config file. Is this technically possible?

Like we do with other platforms: the abstraction layer should know whether a given “general feature” (hardening, etc.) can be expressed in the implementation layer. If it can’t, sometimes it can be degraded (simply not applying the hardening options); sometimes the user cares about this and would prefer an evaluation error; and so on.

The work must systematically generalize whatever the “upper bound” of features is, and try to move things from init-specific features into the generic init feature set.


What about a single file that defines an interface for the foo module, as well as any implementation that isn’t specific to any backend, like this:

$ cat module.nix
{ config, pkgs, lib, ... }: with lib;
let
  cfg = config.services.foo;
in {
  # interface common to all backends: systemd, supervisord, sysvinit, etc...
  options.services.foo = {
    package = mkOption {
      type = types.package;
      default = pkgs.foo;
    };

    user = mkOption {
      type = types.str;
      default = "foo";
    };

    group = mkOption {
      type = types.str;
      default = "foo";
    };

    configFile = mkOption {
      type = types.lines;
    };
  };

  # implementation common to all backends:
  config = {
    services.foo.configFile = ''
      # settings common to all backends
    '';

    users.users.foo = mkIf (cfg.user == "foo") {
      isSystemUser = true;
      group = cfg.group;
    };

    users.groups.foo = mkIf (cfg.group == "foo") { };
  };
}

and then for each backend you can create a separate implementation, as well as any implementation-specific options, like this:

{ config, pkgs, lib, ... }: with lib;
let
  cfg = config.services.foo;
in {
  # interface specific to systemd backend
  options.services.foo = {
    doSomethingSystemdSpecificLikeSystemdHardeningForExample = mkOption {
      type = types.bool;
      default = false;
    };
  };

  # implementation specific to systemd backend
  config = {
    services.foo.configFile = ''
      # systemd specific settings
    '';

    systemd.services.foo = {
      description = "foo service";
      wantedBy = [ "multi-user.target" ];

      serviceConfig = {
        ExecStart = "${cfg.package}/bin/foo --config ${pkgs.writeText "foo.conf" cfg.configFile}";
        User = cfg.user;
        Group = cfg.group;
      } // optionalAttrs cfg.doSomethingSystemdSpecificLikeSystemdHardeningForExample {
        ProtectHome = true;
        # etc...
      };
    };
  };
}
from there other people can optionally write other backends in supervisord.nix, etc…
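For what it’s worth, wiring this up from a machine configuration could then be as simple as importing the shared interface plus one backend. This is just an illustrative sketch; the file names follow the example above, and the option values are made up:

```nix
$ cat configuration.nix
{ ... }:
{
  # the shared interface/implementation plus exactly one backend
  imports = [ ./module.nix ./systemd.nix ];

  services.foo.user = "alice";
  services.foo.doSomethingSystemdSpecificLikeSystemdHardeningForExample = true;
}
```

A supervisord machine would import `./supervisord.nix` instead, and the systemd-specific option simply wouldn’t exist there.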

realistically, i think if you try to push a higher level of abstraction than this (say, an abstraction over systemd vs. other init systems, as opposed to an abstraction over the module itself like my example shows), we end up losing the control we want to retain - yes, this is more work… but that work falls on the shoulders of the people who want to maintain these other backends :man_shrugging:

In this case, we’ll at least have to create a layer that provides basic dependency-based and event-driven features (I guess it’s fairly easy to write some fixed-format scripts to deal with corner cases). But systemd goes far beyond these functionalities: it tries to convert scripts into its own declarative configuration as much as possible. Take these options in the [Unit] section as an example:

       ConditionPathExists=
           Check for the existence of a file. If the specified absolute path
           name does not exist, the condition will fail. If the absolute path
           name passed to ConditionPathExists= is prefixed with an exclamation
           mark ("!"), the test is negated, and the unit is only started if
           the path does not exist.

           ConditionPathExistsGlob= is similar to ConditionPathExists=, but
           checks for the existence of at least one file or directory matching
           the specified globbing pattern.

           ConditionPathIsDirectory= is similar to ConditionPathExists= but
           verifies that a certain path exists and is a directory.

           ConditionPathIsSymbolicLink= is similar to ConditionPathExists= but
           verifies that a certain path exists and is a symbolic link.

           ConditionPathIsMountPoint= is similar to ConditionPathExists= but
           verifies that a certain path exists and is a mount point.

           ConditionPathIsReadWrite= is similar to ConditionPathExists= but
           verifies that the underlying file system is readable and writable
           (i.e. not mounted read-only).

           ConditionPathIsEncrypted= is similar to ConditionPathExists= but
           verifies that the underlying file system's backing block device is
           encrypted using dm-crypt/LUKS. Note that this check does not cover
           ext4 per-directory encryption, and only detects block level
           encryption. Moreover, if the specified path resides on a file
           system on top of a loopback block device, only encryption above the
           loopback device is detected. It is not detected whether the file
           system backing the loopback block device is encrypted.

           ConditionDirectoryNotEmpty= is similar to ConditionPathExists= but
           verifies that a certain path exists and is a non-empty directory.

           ConditionFileNotEmpty= is similar to ConditionPathExists= but
           verifies that a certain path exists and refers to a regular file
           with a non-zero size.

           ConditionFileIsExecutable= is similar to ConditionPathExists= but
           verifies that a certain path exists, is a regular file, and marked
           executable.

And these can still be options needed for the service to operate correctly, rather than optional ones that don’t affect overall functionality. How can we replace them when we are using simpler inits/supervisors, such as runit or OpenRC?

  1. Letting service maintainers avoid these rarer options. They won’t be happy, as expressed above.
  2. Writing workaround scripts (and throwing them into overrides or something) for the other inits/supervisors every time these options appear. This looks clumsy but will probably be more acceptable.
  3. Writing workaround scripts that mimic each of these options. It also looks dirty, and I don’t think others would accept it.
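For the path-based conditions at least, the workaround scripts of options 2 and 3 don’t have to be complicated. Here is a minimal sketch in plain POSIX sh of how a pre-start check could approximate a few Condition* directives; the helper names and example paths are mine, not anything systemd or the RFC defines:

```shell
#!/bin/sh
# Approximations of a few systemd [Unit] Condition* directives as shell tests.
# Each helper returns 0 (condition holds) or non-zero (condition fails).

cond_path_exists()        { [ -e "$1" ]; }                 # ConditionPathExists=
cond_path_is_directory()  { [ -d "$1" ]; }                 # ConditionPathIsDirectory=
cond_path_is_symlink()    { [ -L "$1" ]; }                 # ConditionPathIsSymbolicLink=
cond_file_not_empty()     { [ -s "$1" ]; }                 # ConditionFileNotEmpty=
cond_file_is_executable() { [ -f "$1" ] && [ -x "$1" ]; }  # ConditionFileIsExecutable=

# A leading "!" negates the test, as in ConditionPathExists=!/some/path
cond_path_exists_neg() {
  case "$1" in
    !*) ! cond_path_exists "${1#!}" ;;
    *)  cond_path_exists "$1" ;;
  esac
}

# Example: start only when /etc is a directory and /nonexistent is absent.
if cond_path_is_directory /etc && cond_path_exists_neg '!/nonexistent'; then
  echo "conditions met"
else
  echo "conditions failed"
fi
```

A backend generator could emit such a prelude at the top of each run script; the subtle part is matching systemd’s semantics, where a failed condition skips the start without marking the unit failed, which each supervisor would have to express in its own way.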

So yeah,

And even my assumption (stated above) may not come true.

The example you cite, though, can trivially be converted to a Bash script that does exactly what is described, right? This is a problem the folks using another init have to fix.

If a program requires a certain path to be available, using another init would have required fixing that in the script (or wherever) anyway.

So for me, if your service manager does not support that type of feature, then “here be dragons” and that’s it.

Also, the whole challenge for the other service managers is to find a way to make all of this conversion composable and working. Maybe not all services use 99% of systemd’s features and you can get away with a lot, but that’s the deal.

Some people say that nixpkgs is an amazing software collection because it has very few policies; this is another instance of that. No one is prevented from using s6 for running programs, but you have to pay the tax of not using systemd yourself if the program you want to run heavily relies on systemd features, perhaps by contributing to s6 and finding a way to keep the s6 culture while becoming more and more compatible with systemd. Previous Nix contributors have tried it, and I believe this is a matter of effort and dedication. I wish the best to the people doing this, because monoculture is bad, but I also think it’s Herculean work.