Rootless podman setup with Home Manager

Hello,

After countless hours of tinkering I managed to get rootless Podman pods working. I am sharing my config in case it helps someone.

In this example I have set up the Unifi Network Application based on LinuxServer.io’s image. The example defines a Podman network, a pod, and two containers running in the pod.

**Important**: You have to use Home Manager for rootless Podman containers/pods. Using systemd.user.services won’t work. It took me quite a long time to learn that (see the Nixpkgs GitHub for details).

This example is for inspiration only. Do not copy-paste it, since my configuration choices might not match what you intend to do.

Enable Podman

This sets up the system-wide Podman configuration and adds newuidmap and sh to the PATH of users’ systemd units.

# services/podman.nix

{ config, lib, pkgs, inputs, ... }:
{
  virtualisation = {
    containers.enable = true;
    oci-containers.backend = "podman";
    podman = {
      enable = true;
      autoPrune.enable = true;
      # Create a `docker` alias for podman, to use it as a drop-in replacement
      dockerCompat = true;
      # Required for containers under podman-compose to be able to talk to each other.
      defaultNetwork.settings.dns_enabled = true;
    }; # podman

    containers.storage.settings = {
      storage = {
        driver = "btrfs";
        runroot = "/run/containers/storage";
        graphroot = "/var/lib/containers/storage";
        options.overlay.mountopt = "nodev,metacopy=on";
      }; # storage
    };
  }; # virtualisation

  users.groups.podman = {
    name = "podman";
  };

  environment.systemPackages = with pkgs; [
    dive # look into docker image layers
    podman
    podman-tui   # Terminal mgmt UI for Podman
    passt    # For Pasta rootless networking
  ];

  # Add 'newuidmap' and 'sh' to the PATH for users' Systemd units. 
  # Required for Rootless podman.
  systemd.user.extraConfig = ''
    DefaultEnvironment="PATH=/run/current-system/sw/bin:/run/wrappers/bin:${lib.makeBinPath [ pkgs.bash ]}"
  '';
}

Rootless Podman service config

The example is a working setup of the Unifi Network Application using Home Manager’s services.podman.networks and services.podman.containers options. The application runs in a pod with two containers: the MongoDB database and the web application itself.

Home Manager does not have an option for pods, so I have created a systemd user service that creates the pod.

# services/unifi.nix
{ config, lib, pkgs, ... }:

{
  ## Create a user "unifi" and a group for it 
  
  users.groups.unifi = {
    name = "unifi";
    gid = 987;  # I set the UID/GID manually since I refer to them later
  };

  users.users.unifi = {
    group = "unifi";
    uid = 1001;
    linger = true;   # Required for the services to start automatically without login (systemd lingering)
    isNormalUser = true;
    description = "Unifi Controller";
    extraGroups = [ "podman" ];
    shell = pkgs.bash;
  };
 
  ## Allow "unifi" user to use Home Manager
  nix.settings.allowed-users = ["unifi"];

  ## Set up secrets
  
  # I use nix-sops for secrets management:
  # https://github.com/Mic92/sops-nix
  
  # /run/secrets/unifi/mongodb/admin
  sops.secrets."unifi/mongodb/admin" = {
    sopsFile =  ./secrets/unifi.yaml;
    mode = "0440";
    owner = "root";
    group = config.users.groups.unifi.name;
  };

  # /run/secrets/unifi/mongodb/unifi
  sops.secrets."unifi/mongodb/unifi" = {
    sopsFile =  ./secrets/unifi.yaml;
    mode = "0440";
    owner = "root";
    group = config.users.groups.unifi.name;
  };
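
  # The layout of ./secrets/unifi.yaml is not shown here. A hypothetical
  # sops-encrypted file matching the secret names above would contain roughly:
  #
  #   unifi:
  #     mongodb:
  #       admin: <MongoDB root password>
  #       unifi: <password for the "unifi" database user>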
  
  # /run/secrets-rendered/unifi/db-env
  # The ${config.sops.placeholder."unifi/mongodb/admin"} is the admin
  # password set with nix-sops.
  sops.templates."unifi/db-env" = {
    content = ''
        TZ=Etc/UTC
        MONGO_INITDB_ROOT_USERNAME=admin
        MONGO_INITDB_ROOT_PASSWORD=${config.sops.placeholder."unifi/mongodb/admin"}
        MONGO_USER=unifi
        MONGO_PASS=${config.sops.placeholder."unifi/mongodb/unifi"}
        MONGO_HOST=unifi-db
        MONGO_DBNAME=unifi
        MONGO_AUTHSOURCE=admin
      '';
    mode = "0440";
    owner = "root";
    group = config.users.groups.unifi.name;
  };
  
  # /run/secrets-rendered/unifi/app-env
  sops.templates."unifi/app-env" = {
    content = ''
        TZ=Etc/UTC
        PUID=${toString config.users.users.unifi.uid}
        PGID=${toString config.users.groups.unifi.gid}
        MONGO_USER=unifi
        MONGO_PASS=${config.sops.placeholder."unifi/mongodb/unifi"}
        MONGO_HOST=unifi-db
        MONGO_PORT=27017
        MONGO_DBNAME=unifi
        MONGO_AUTHSOURCE=admin
        MEM_LIMIT=1024
      '';
    mode = "0440";
    owner = "root";
    group = config.users.groups.unifi.name;
  };

  ## Write MongoDB config

  environment.etc."unifi/mongo/mongod.conf" = {
    user = "unifi";
    group = "unifi";
    mode = "0660";
    text =  ''
      # mongod.conf
      systemLog:
        destination: file
        path: /var/log/mongodb/mongod.log
        logAppend: true
        logRotate: rename
    '';
  };

  ## MongoDB init script to create "unifi" user
  # see https://github.com/linuxserver/docker-unifi-network-application
  
  environment.etc."unifi/mongo/init-mongo.sh" = {
    user = "unifi";
    group = "unifi";
    mode = "0550";
    text = ''
      #!/bin/bash
      if which mongosh > /dev/null 2>&1; then
        mongo_init_bin='mongosh'
      else
        mongo_init_bin='mongo'
      fi
      "''${mongo_init_bin}" <<EOF
      use ''${MONGO_AUTHSOURCE}
      db.auth("''${MONGO_INITDB_ROOT_USERNAME}", "''${MONGO_INITDB_ROOT_PASSWORD}")
      db.createUser({
        user: "''${MONGO_USER}",
        pwd: "''${MONGO_PASS}",
        roles: [
          { db: "''${MONGO_DBNAME}", role: "dbOwner" },
          { db: "''${MONGO_DBNAME}_stat", role: "dbOwner" }
        ]
      })
      EOF
    '';
  };
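
  # Note: the official mongo image only runs scripts in /docker-entrypoint-initdb.d
  # on the first start, while the data directory is still empty, so this
  # user-creation script is not re-run against an existing database.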

  ## Use systemd.tmpfiles to set up directories and Btrfs subvolumes
  
  systemd.tmpfiles.rules = [
    # ''v "${config.users.users.unifi.home}/mongo" 0770 unifi unifi''
    ''d "/etc/unifi" 0770 unifi unifi''
    ''d "/etc/unifi/mongo" 0770 unifi unifi''
    ''v "${config.users.users.unifi.home}/db" 0770 unifi unifi''
    ''v "${config.users.users.unifi.home}/log" 0770 unifi unifi''
    ''v "${config.users.users.unifi.home}/app" 0770 unifi unifi''
  ];
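
  # In tmpfiles rules, 'd' creates a plain directory, while 'v' creates a btrfs
  # subvolume when the filesystem supports it and otherwise falls back to a
  # plain directory (see tmpfiles.d(5)).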

  ## User configs with Home Manager
  
  home-manager.users.unifi = { pkgs, ... }: {
    programs.bash.enable = true;
    services.podman = {
      enable = true;
      autoUpdate.enable = true;
     };
     
    ## Create network for the pod
    # This might not be needed, but I needed it to get DNS working.
    # My firewall config necessitates the subnet definition.
    
    services.podman.networks = {
      unifi = {
        driver = "bridge";
        description = "Unifi Network Application network";
        extraPodmanArgs = [
          "--dns=192.168.2.1"
        ];
        subnet = "192.168.128.0/24";
      };
    };
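
    # Home Manager generates a podman-unifi-network.service user unit for the
    # network above; the pod service below pulls it in via Requires=.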

    ## Define pod "unifi" for the service
    # There isn't a Home Manager option for a pod so I create a Systemd service manually

    systemd.user.services.pod-unifi = {
        Unit = {
          Description = "Start podman 'unifi' pod";
          Wants = ["network-online.target"];   # This might be ignored
          Requires = [ "podman-unifi-network.service" ];
          After = ["network-online.target"];   # This might be ignored
        };
        Install = {
          WantedBy = [
            "podman-unifi-network-application.service"
            "podman-unifi-db.service"
          ];
        }; # Install
        Service = {
          Type = "forking";          
          ExecStartPre = [
            # This is needed for the pod to start automatically
            "${pkgs.coreutils}/bin/sleep 3s"
            # Port config see:
            # https://help.ui.com/hc/en-us/articles/218506997-Required-Ports-Reference    
            # The image requires `--userns=host`
            ''
              -${pkgs.podman}/bin/podman pod create --replace \
                --network=unifi \
                --userns=host \
                --cpus=3 \
                --label=PODMAN_SYSTEMD_UNIT="pod-unifi.service" \
                -p 192.168.3.1:8443:8443/tcp \
                -p 192.168.4.1:1900:1900/udp \
                -p 192.168.4.1:3478:3478/udp \
                -p 192.168.4.1:5514:5514/udp \
                -p 192.168.4.1:10001:10001/udp \
                -p 192.168.4.1:6789:6789/tcp \
                -p 192.168.4.1:8080:8080/tcp unifi
            ''
          ]; # ExecStartPre
          ExecStart = "${pkgs.podman}/bin/podman pod start unifi";
          ExecStop = "${pkgs.podman}/bin/podman pod stop unifi";
          RestartSec = "1s";
        }; # Service
    }; # systemd.user.services.pod-unifi
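
    # To inspect the rootless units, open a session as the "unifi" user first,
    # for example (assuming systemd-machined is available):
    #   sudo machinectl shell unifi@.host
    #   systemctl --user status pod-unifi.service
    #   journalctl --user -u pod-unifi.service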

    ## Define containers to run in the pod

    services.podman.containers = {

      ## MongoDB container
      
      unifi-db = {
        image = "docker.io/library/mongo:7";
        description = "Start Unifi Network Application database (podman)";

        extraConfig = {
          Unit = {
            Wants = ["network-online.target"];  # This might be ignored
            Requires = [ "pod-unifi.service" ];
            After = [ "pod-unifi.service" ];
          };
          Install = {
            RequiredBy = ["podman-unifi-network-application.service" ];
          };
        };  # extraConfig

        autoStart = true;
        autoUpdate = "registry";
        extraPodmanArgs = [
          "--pod=unifi"         # Run the container in the pod "unifi"
          "--group-add=keep-groups"
        ];
        environmentFile = [ "${config.sops.templates."unifi/db-env".path}" ];
        volumes = [
          "/etc/unifi/mongo/mongod.conf:/etc/mongod.conf:ro"
          "/etc/unifi/mongo/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro"
          "${config.users.users.unifi.home}/db:/data/db"
          "${config.users.users.unifi.home}/log:/var/log/mongodb"
        ];
      }; # unifi-db

     ## Application container
     
      unifi-network-application = {
        image = "lscr.io/linuxserver/unifi-network-application:latest";
        description = "Start Unifi Network Application (podman)";

        extraConfig = {
          Unit = {
            Wants = ["network-online.target"];   # This might be ignored
            Requires = [ "pod-unifi.service" "podman-unifi-db.service" ];
            After = ["network-online.target" "pod-unifi.service" "podman-unifi-db.service" ];
          };
          Install = {
            WantedBy = [ "default.target" ];
          };
        }; # extraConfig

        autoStart = true;
        autoUpdate = "registry";
        extraPodmanArgs = [
          "--pod=unifi"
          "--group-add=keep-groups"
        ];
        # Pass passwords etc. to the container through environment
        environmentFile = [ "${config.sops.templates."unifi/app-env".path}" ];

        volumes = [
          "${config.users.users.unifi.home}/app:/config"
         ]; # volumes
      }; # unifi-network-application

    }; # services.podman.containers

    # The state version is required and should stay at the version you
    # originally installed.
    home.stateVersion = "24.11";
  };  # home-manager.users.unifi

} # EOF
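
For reference, here is a minimal sketch of how the two modules above might be wired into the system configuration. The file layout and the use of the Home Manager NixOS module are assumptions; adjust them to your own setup.

# configuration.nix (sketch; paths and Home Manager wiring are assumptions)
{ config, pkgs, ... }:
{
  imports = [
    ./services/podman.nix
    ./services/unifi.nix
    # ...plus the Home Manager NixOS module, e.g. from the home-manager flake input
  ];

  # Use the system-level nixpkgs instance for Home Manager
  home-manager.useGlobalPkgs = true;
  # Install user packages via the system profile instead of ~/.nix-profile
  home-manager.useUserPackages = true;
}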

FYI, this line

 nix.settings.trusted-users = [ "unifi" ];

essentially gives the user root access, so it’s not exactly a rootless setup anymore. From the nix manual:

Adding a user to trusted-users is essentially equivalent to giving that user root access to the system. For example, the user can access or replace store path contents that are critical for system security.

That said, this shouldn’t really be needed for the user to use home-manager; I’m not sure why you had to add it…


Thanks for the correction!

I confused nix.settings.trusted-users with nix.settings.allowed-users. Fixed.

Hello. Thank you for the notes. I use overlay instead of btrfs for Podman containers (because of an ext4 filesystem). I am totally a noob at this, but I tried changing driver = "btrfs"; to driver = "overlay";.
Of course that was too good to be true to just work, but I was wondering if you had a solution for that or a more detailed doc to reference.

When I change to overlay it works like a charm for the root user, but not for the non-root user; I simply get this error when running podman ps as a non-root user:

ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "btrfs" from database - delete libpod local files ("/home/nonrootuser/.local/share/containers/storage") to resolve.  May prevent use of images created by other tools 
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "btrfs" from database - delete libpod local files ("/home/nonrootuser/.local/share/containers/storage") to resolve.  May prevent use of images created by other tools 
Error: configure storage: "/home/nonrootuser/.local/share/containers/storage/btrfs" is not on a btrfs filesystem: prerequisites for driver not satisfied (wrong filesystem?)

Of course, if you want more information don’t hesitate to ask :)
Have a great day!

Thank you.

EDIT:

I got it to work; of course I had to delete the /home/nonrootuser/.local/share/containers/storage folder. But when I declare the Home Manager service like services.podman.containers.<name> (and other configs), it doesn’t start automatically under the correct namespace.

Personally I’ve not used this method, but I can say I’ve been using this for rootless Podman quadlets successfully.

Sorry for not replying earlier. I had my own set of problems caused by a dying (well, now-dead) PSU…

The Podman docs have instructions on how to reset Podman (podman system reset) when changing the storage settings, but I can see you found those already.

In my example (which should not be copy-pasted), the containers run in a pod and the user namespace is set at the pod level (the --userns=host parameter given to podman pod create). For my purposes host works best, since I create a separate user for each service.

From Podman docs:

--userns=mode

| Key     | Host User | Container User                                              |
|---------|-----------|--------------------------------------------------------------|
| ""      | $UID      | 0 (Default User account mapped to root user in container.)   |
| host    | $UID      | 0 (Default User account mapped to root user in container.)   |
| keep-id | $UID      | $UID (Map user account to same UID within container.)        |
| auto    | $UID      | nil (Host User UID is not mapped into container.)            |
| nomap   | $UID      | nil (Host User UID is not mapped into container.)            |