Podman uid mapping

Hello Everyone,

I’m trying to run tubearchivist in a podman container, similar to here: Package request: Tube archivist · Issue #248397 · NixOS/nixpkgs · GitHub. Deviating from that, I would like to run everything in a pod.

There is a known issue with the Elasticsearch container where the mount needs certain permissions (owned by uid 1000, Package request: Tube archivist · Issue #248397 · NixOS/nixpkgs · GitHub). That works in general (e.g. using tmpfiles), but I don’t want to do this, as the user with uid 1000 has a different purpose on my machine. I thought it would be as easy as mapping the uids, but it fails with the following error:

Mar 29 13:06:46 nixserve podman[3538661]: 2025-03-29 13:06:46.174801782 +0100 CET m=+0.052239157 container create 3ad2b2dbca497b07789b1ffb723de516822ede4e4950f1dc4c57a903818b1a78 (image=docker.io/bbilly1/tubearchivist-es:latest, name=tubearchivist-es, pod_id=15c408119d432d9714eb5a941f4a62b9afc1380a37969054c8d22ecf0ee86670, org.opencontainers.image.documentation=https://www.elastic.co/guide/en/elasticsearch/reference/index.html, PODMAN_SYSTEMD_UNIT=podman-tubearchivist-es.service, org.label-schema.license=Elastic-License-2.0, org.label-schema.usage=https://www.elastic.co/guide/en/elasticsearch/reference/index.html, org.label-schema.vendor=Elastic, org.label-schema.vcs-ref=747663ddda3421467150de0e4301e8d4bc636b0c, org.label-schema.version=8.17.2, org.opencontainers.image.title=Elasticsearch, org.opencontainers.image.revision=747663ddda3421467150de0e4301e8d4bc636b0c, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.created=2025-02-05T22:10:57.067596412Z, org.opencontainers.image.source=https://github.com/elastic/elasticsearch, org.label-schema.schema-version=1.0, org.label-schema.name=Elasticsearch, org.opencontainers.image.url=https://www.elastic.co/products/elasticsearch, org.label-schema.url=https://www.elastic.co/products/elasticsearch, org.label-schema.vcs-url=https://github.com/elastic/elasticsearch, org.opencontainers.image.vendor=Elastic, org.opencontainers.image.licenses=Elastic-License-2.0, org.opencontainers.image.version=8.17.2, org.label-schema.build-date=2025-02-05T22:10:57.067596412Z)
Mar 29 13:06:46 nixserve podman[3538661]: 2025-03-29 13:06:46.176355344 +0100 CET m=+0.053792719 container restart 8b5ae9855a061691aa05df402f54c0cf7469cd2ce5204fef5b00154a3d464a22 (image=localhost/podman-pause:5.4.1-315532800, name=15c408119d43-infra, pod_id=15c408119d432d9714eb5a941f4a62b9afc1380a37969054c8d22ecf0ee86670, io.buildah.version=1.39.2)
Mar 29 13:06:46 nixserve podman-tubearchivist-es-start[3538661]: time="2025-03-29T13:06:46+01:00" level=error msg="Starting some container dependencies"
Mar 29 13:06:46 nixserve podman-tubearchivist-es-start[3538661]: time="2025-03-29T13:06:46+01:00" level=error msg="\"mounting storage for container 8b5ae9855a061691aa05df402f54c0cf7469cd2ce5204fef5b00154a3d464a22: creating overlay mount to /var/lib/containers/storage/overlay/6f582f556cc4f224de23b786e317d247338b581af4d3f944e54642fc7e419631/merged, mount_data=\\\"lowerdir=/var/lib/containers/storage/overlay/6f582f556cc4f224de23b786e317d247338b581af4d3f944e54642fc7e419631/mapped/0/l/7F25NP3P6CAA36RY4OZRDNX3AV,upperdir=/var/lib/containers/storage/overlay/6f582f556cc4f224de23b786e317d247338b581af4d3f944e54642fc7e419631/diff,workdir=/var/lib/containers/storage/overlay/6f582f556cc4f224de23b786e317d247338b581af4d3f944e54642fc7e419631/work\\\": permission denied\""
Mar 29 13:06:46 nixserve podman[3538661]: 2025-03-29 13:06:46.188393775 +0100 CET m=+0.065831150 container remove 3ad2b2dbca497b07789b1ffb723de516822ede4e4950f1dc4c57a903818b1a78 (image=docker.io/bbilly1/tubearchivist-es:latest, name=tubearchivist-es, pod_id=15c408119d432d9714eb5a941f4a62b9afc1380a37969054c8d22ecf0ee86670, org.label-schema.schema-version=1.0, org.label-schema.usage=https://www.elastic.co/guide/en/elasticsearch/reference/index.html, org.opencontainers.image.url=https://www.elastic.co/products/elasticsearch, org.opencontainers.image.source=https://github.com/elastic/elasticsearch, org.label-schema.license=Elastic-License-2.0, org.opencontainers.image.documentation=https://www.elastic.co/guide/en/elasticsearch/reference/index.html, PODMAN_SYSTEMD_UNIT=podman-tubearchivist-es.service, org.opencontainers.image.revision=747663ddda3421467150de0e4301e8d4bc636b0c, org.label-schema.vcs-url=https://github.com/elastic/elasticsearch, org.opencontainers.image.ref.name=ubuntu, org.opencontainers.image.version=8.17.2, org.opencontainers.image.title=Elasticsearch, org.opencontainers.image.vendor=Elastic, org.label-schema.version=8.17.2, org.label-schema.build-date=2025-02-05T22:10:57.067596412Z, org.label-schema.vcs-ref=747663ddda3421467150de0e4301e8d4bc636b0c, org.label-schema.vendor=Elastic, org.label-schema.url=https://www.elastic.co/products/elasticsearch, org.opencontainers.image.licenses=Elastic-License-2.0, org.label-schema.name=Elasticsearch, org.opencontainers.image.created=2025-02-05T22:10:57.067596412Z)
Mar 29 13:06:46 nixserve podman-tubearchivist-es-start[3538661]: Error: starting some containers: internal libpod error
Mar 29 13:06:46 nixserve podman[3538661]: 2025-03-29 13:06:46.141438836 +0100 CET m=+0.018876221 image pull 3e32bff3faa48989d5d6f063d07188c342387cfe04c8c4ad7db8900e0fa86c6d docker.io/bbilly1/tubearchivist-es
Mar 29 13:06:46 nixserve systemd[1]: podman-tubearchivist-es.service: Main process exited, code=exited, status=126/n/a

My config:

    users.users.${user} = {
      isSystemUser = true;
      group = "${user}";
      uid = 994;
    };
    users.groups.${user} = {};

    systemd.tmpfiles.rules = [
      "d ${dataDir} 750 ${user} ${user}"
      "d ${dataDir}/elasticsearch 750 ${user} ${user}"
    ];

    systemd.services.create-tubearchivist-pod = {
      serviceConfig.Type = "oneshot";
      wantedBy = let
        backend = config.virtualisation.oci-containers.backend;
      in [
        "${backend}-tubearchivist.service"
        "${backend}-tubearchivist-es.service"
        "${backend}-tubearchivist-redis.service"
      ];
      wants = ["network-online.target"];
      after = ["network-online.target"];
      script = ''
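        # map uid 1000 inside the container to ${user}'s uid on the host (a single uid)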
        ${pkgs.podman}/bin/podman pod exists ${user} || \
          ${pkgs.podman}/bin/podman pod create \
            -n ${user} \
            -p '127.0.0.1:${builtins.toString cfg.port}:8000' \
            --uidmap="1000:${builtins.toString config.users.users.${user}.uid}:1"
      '';
    };

    virtualisation.oci-containers.backend = "podman";
    virtualisation.podman.defaultNetwork.settings.dns_enabled = true;

    # need elastic api reporting v8
    virtualisation.oci-containers.containers.tubearchivist-es = {
      image = "docker.io/bbilly1/tubearchivist-es";
      environment = {
        ES_JAVA_OPTS = "-Xms1g -Xmx1g";
        "xpack.security.enabled" = "true";
        "discovery.type" = "single-node";
        "path.repo" = "/usr/share/elasticsearch/data/snapshot";
      };
      volumes = ["${dataDir}/elasticsearch:/usr/share/elasticsearch/data"];
      extraOptions = [
        "--pod=tubearchivist"
      ];
    };

I already checked that the uid in the pod creation script matches the user’s actual uid:

id -u tubearchivist
994
cat /nix/store/a4igl9hvjgy190ldvdrhqnp5d3k3kchs-unit-script-create-tubearchivist-pod-start/bin/create-tubearchivist-pod-start
#!/nix/store/ki4if6b0w5bqv8dc5lrjp8xm7wjy9dlf-bash-5.2p37/bin/bash
set -e

/nix/store/wqfhmaiwzj5awp9fkfnivv71n3dvwwqc-podman-5.4.1/bin/podman pod exists tubearchivist || \
  /nix/store/wqfhmaiwzj5awp9fkfnivv71n3dvwwqc-podman-5.4.1/bin/podman pod create \
    -n tubearchivist \
    -p '127.0.0.1:8040:8000' \
    --uidmap="1000:994:1"
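For completeness I also checked the host directory itself; a quick sketch, with /var/lib/tubearchivist standing in for dataDir (hypothetical path):

    # owner uid:gid, mode, and name of the bind-mount source
    stat -c '%u:%g %a %n' /var/lib/tubearchivist/elasticsearch
    # the tmpfiles rules above should yield: 994:994 750 /var/lib/tubearchivist/elasticsearch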

I also tried just mapping the uids on the container itself (roughly the command sketched below), but then it complains that this is not possible when using a pod.
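Reconstructed from memory, the rejected variant looked something like this:

    # per-container mapping; podman rejects --uidmap for a container joining a pod
    podman create --pod tubearchivist \
      --uidmap=1000:994:1 \
      docker.io/bbilly1/tubearchivist-es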

Found the solution myself. It works without mapping by just using --user:

So this works:

      extraOptions = [
        "--pod=tubearchivist"
        "--user=${builtins.toString config.users.users.${user}.uid}"
      ];
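Outside of the NixOS module, the equivalent plain podman invocation would look roughly like this (a sketch, assuming the pod created above and /var/lib/tubearchivist as the hypothetical dataDir):

    # run Elasticsearch as uid 994 inside the pod, no uid mapping involved
    podman run -d --name tubearchivist-es \
      --pod tubearchivist \
      --user 994 \
      -v /var/lib/tubearchivist/elasticsearch:/usr/share/elasticsearch/data \
      docker.io/bbilly1/tubearchivist-es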

My experience with this sort of thing is stale, but still possibly helpful.

To me, --uidmap="1000:994:1" looks incompatible with bind-mounting a uid 994 host directory into the container. You might try simply removing that parameter (or changing it to 994:994:1).

If you can open a shell into the container, use ls -l (or ls -ln for numeric ids) to see the ownership of the mapped directory. Then you can go back to podman and fiddle with config bits to make it what you want. tubearchivist does not need to be running for you to explore how the filesystem is configured.
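For example (a sketch, assuming the image ships bash, which the Ubuntu-based Elastic images do): a throwaway container from the same image with the same volume lets you inspect ownership without starting Elasticsearch itself:

    # one-off shell in the image; ls -ln shows numeric uids
    podman run --rm \
      -v /var/lib/tubearchivist/elasticsearch:/usr/share/elasticsearch/data \
      --entrypoint /bin/bash \
      docker.io/bbilly1/tubearchivist-es \
      -c 'ls -ln /usr/share/elasticsearch'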

I hope this helps, or someone else responds with more correct guidance.


@ericgundrum, thank you for your reply. I did find a way, but the other approach should have worked as well, or at least I would like to understand why it did not. As far as I understood it, the syntax is --uidmap="container:local:amount". That is why I used 1000 for the container uid: it is the uid set for the user in the image.

I think --uidmap="container:local:amount" might be represented better as --uidmap="in-container:from-local:amount", if I recall correctly.

The uidmap parameter makes local uid 994 appear as uid 1000 within the container. This would be bad if the user within the container is uid 994.
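To make the direction concrete, a sketch (rootful podman; alpine used only as a convenient test image, and the hypothetical host path from above): the first field is the uid inside the container, the second the uid on the host. Note that with rootful podman the triples you pass are the entire user namespace mapping, so any uid you leave out (including root) ends up unmapped; that alone can produce "permission denied" while mounting the image layers, which may be what happened in the log above, where only 1000:994:1 was given.

    # map container root to host root, and container uid 1000 to host uid 994
    podman run --rm \
      --uidmap=0:0:1 --uidmap=1000:994:1 \
      --gidmap=0:0:1 --gidmap=1000:994:1 \
      -v /var/lib/tubearchivist/elasticsearch:/data \
      docker.io/library/alpine ls -ln /data
    # files owned by host uid 994 are listed as uid 1000 here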

Long ago, before uid mapping existed, I chose a personal uid for my user on the hosts I manage. This makes it easier to avoid conflicts across hosts. I was tired of having to decipher file ownership whenever I attached to a host which thought my uid was different.

Yes, I think we align here. My local uid is 994 and the uid needed in the container is 1000. But somehow this did not work.

Sorry, I misunderstood the original problem.

Using --user sets the uid of the main execution process within the container – probably making the tubearchivist user commands run as uid 994 within the container, avoiding the need for uid mapping in this case.

What actually happens with --uidmap gets complicated when the container execution is not by root. (Details are described in man podman-create.) Possibly you encountered this.

There is an extra layer for mapping when executing by non-root users. Using a shell to explore within the container runtime makes it easier to see how things like uid may not be what you expect.
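A quick way to see what you actually got, a sketch: every process can read its own user-namespace mapping from /proc, and id shows the effective uid:

    # print the active mapping: <uid-in-container> <uid-on-host> <count>
    podman run --rm docker.io/library/alpine cat /proc/self/uid_map
    # and confirm what uid the process runs as when --user is given
    podman run --rm --user 994 docker.io/library/alpine id -u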

No worries.

I’m new to podman, but from what I understand, the above config should run the container as root. Is that correct?

I do not know how NixOS or systemd will run a container. Discovering that is near the top of my list whenever I get back to needing podman. I’d love to know what you find out.

Unfortunately most of my experience is with a user running the container manually as needed. That is a lot of why I learned about bind-mounting host directories with uid mapping, and connecting to a shell in the container to debug my frequent errors.

Ultimately this led to useful ways to specify consistent tooling for colleagues running on whatever host OS they wanted (including Windoz). All they needed was a basic podman/docker runtime (and a local gnu-make) – all the rest just worked.

It was super useful for a dev to run locally exactly what ran on a CI server. They could find and fix their own problems much more quickly.