Cannot set file attributes for '/var/empty'

Hi all,

I have some containers running on Proxmox that are deployed via a flake from my local machine like this:

nixos-rebuild switch --fast --flake .#<hostname> --target-host <hostname> --build-host <hostname> --use-remote-sudo

Today I found the following message in the log:

setting up tmpfiles
Cannot set file attributes for '/var/empty', value=0x00000010, mask=0x00000010, ignoring: Operation not permitted
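
As far as I can tell, the mask 0x00000010 in that message is FS_IMMUTABLE_FL from linux/fs.h, i.e. the flag that `chattr +i` sets, so tmpfiles is trying to mark the directory immutable. A quick sanity check of the bit value:

```shell
# FS_IMMUTABLE_FL in linux/fs.h is bit 4 (0x10); the tmpfiles message
# is therefore about setting the immutable attribute on /var/empty.
printf '0x%08x\n' $(( 1 << 4 ))
```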

It seems to come from this commit:
https://github.com/NixOS/nixpkgs/commit/59e37267556eb917146ca3110ab7c96905b9ffbd

Not sure if this is related to my local config or an oversight in the commit.
Any hint or advice on what to do next?

Ah, and trying to do it on the machine itself gives:

λ sudo chattr +i /var/empty
chattr: Operation not permitted while setting flags on /var/empty

I think that is the root cause; for me that command just works.

That’s at least good to know.

So, I have my host managing the mounts for the containers. This results in something like this:

λ mount
data/subvol-301-disk-0 on / type zfs (rw,xattr,posixacl)
data/subvol-301-disk-1 on /nix type zfs (rw,xattr,posixacl)

Is that maybe part of my issue?

The old activation script called chattr with the -f flag, which suppresses most error messages. Can you check whether the error disappears if you call chattr -f?

It’s suppressed but still there (✗ is the error_symbol from starship prompt):

✗ sudo chattr -f +i /var/empty

Ok, the old script also called chattr -f || true, so failures were silently ignored. Setting the attribute has then almost certainly also failed in the past (probably because of your bind mounts). Now systemd-tmpfiles complains about it.
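For context, the linked commit moves this from the activation script into a systemd-tmpfiles rule of type h ("set file/directory attributes"); roughly like this (my paraphrase, not a verbatim quote of the commit):

```
h /var/empty - - - - +i
```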

Well, to wrap up:
My containers are unprivileged. Hence the capability CAP_LINUX_IMMUTABLE required to execute chattr +i ... is not granted to any user in the container. So the error makes total sense (and was always there, just not visible).
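
If you want to verify that from inside the container, one way (a sketch of mine, not from the thread) is to test bit 9, CAP_LINUX_IMMUTABLE, in the process's capability bounding set:

```shell
# CAP_LINUX_IMMUTABLE is capability number 9; check whether it is in
# this shell's bounding set (CapBnd in /proc/self/status, a hex mask).
# In an unprivileged LXC container the bit is typically dropped.
capbnd=$(awk '/^CapBnd:/ {print $2}' /proc/self/status)
if [ $(( (0x$capbnd >> 9) & 1 )) -eq 1 ]; then
  echo "CAP_LINUX_IMMUTABLE is available"
else
  echo "CAP_LINUX_IMMUTABLE is missing - chattr +i will fail with EPERM"
fi
```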


Do you have boot.isContainer set to true?

Not explicitly, but via

imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

it should be: https://github.com/NixOS/nixpkgs/blob/65b1aa884878bf7a1558328c821bd661c0986467/nixos/modules/virtualisation/proxmox-lxc.nix#L54C1-L54C1
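
To double-check the effective value, something like this should work from the flake's directory (my sketch; assumes nix with flakes enabled and the juno configuration posted below):

```shell
# Evaluate boot.isContainer for the "juno" configuration; with
# proxmox-lxc.nix imported this should print "true". The guard lets
# the sketch exit cleanly where nix or the flake is not available.
if ! command -v nix >/dev/null 2>&1 || [ ! -f flake.nix ]; then
  echo "run this inside the flake directory on a machine with nix"
  exit 0
fi
nix eval .#nixosConfigurations.juno.config.boot.isContainer
```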

And an interesting configuration, can ‘we’ see it?

Sure. The config is in several files that are imported into the flake. I copied everything together and removed a lot of comments to reduce the noise:

{
  description = "NixOS configuration";

  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable-small";
  };

  outputs = inputs @ { self, nixpkgs, ... }: let
    inherit (self) outputs;
  in {
    nixosConfigurations = {
      # juno - lxc on proxmox
      juno = let
        system = "x86_64-linux";
      in
        nixpkgs.lib.nixosSystem {
          inherit system;
          pkgs = nixpkgs.legacyPackages.${system};
          specialArgs = {inherit inputs outputs;};

          modules = [
            (
              { modulesPath, pkgs, ... }: {
                imports = [
                  (modulesPath + "/virtualisation/proxmox-lxc.nix")
                  ( # configuration.nix
                    { config, inputs, lib, pkgs, ... }: {
                      time.timeZone = lib.mkDefault "Europe/Berlin";
                      i18n.defaultLocale = lib.mkDefault "de_DE.UTF-8";
                      nix = {
                        registry = lib.mapAttrs (_: value: {flake = value;}) inputs;
                        nixPath = lib.mapAttrsToList (key: value: "${key}=${value.to.path}") config.nix.registry;
                        settings = {
                          experimental-features = ["nix-command" "flakes"];
                          trusted-users = ["root" "@wheel"];
                        };
                      };

                      services.openssh = {
                        enable = true;
                        settings = {
                          PermitRootLogin = lib.mkDefault "no"; # disable root login
                          PasswordAuthentication = false; # disable password login
                        };
                      };

                      security.pam.enableSSHAgentAuth = true;
                      services.getty.autologinUser = lib.mkDefault "root";
                      console.enable = true;

                      systemd.services."getty@" = {
                        unitConfig.ConditionPathExists = ["" "/dev/%I"];
                      };

                      security.sudo.wheelNeedsPassword = false;
                      security.sudo.execWheelOnly = true;

                      users.users = {
                        root.password = "nixos";
                        ptweety = {
                          isNormalUser = true;
                          description = "ptweety";
                          extraGroups = ["wheel" "systemd-journal"];
                          openssh.authorizedKeys.keys = [  "ssh-ed25519 ..." ];
                        };
                      };
                    }
                  )
                  ( # hardware-configuration.nix
                    {lib, ...}: {
                      imports = [];

                      boot.initrd.availableKernelModules = ["nvme" "xhci_pci" "ahci"];
                      boot.initrd.kernelModules = [];
                      boot.kernelModules = ["kvm-amd"];
                      boot.extraModulePackages = [];

                      swapDevices = [];

                      networking.interfaces.eth0.useDHCP = lib.mkDefault true;

                      nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
                      powerManagement.cpuFreqGovernor = lib.mkDefault "ondemand";
                    }
                  )
                ];

                networking = {
                  hostName = "juno"; # Define your hostname.
                  hostId = "xxxxxxxx"; # $ head -c 8 /etc/machine-id
                };

                system.stateVersion = "23.05";
              }
            )
          ];
        };
    };
  };
}

Oh I see,

I thought you were ‘provisioning’ Proxmox itself via Nix, which would be very cool!

You may be interested in a cool little shell script that does lots of cool stuff on top of this.

So, it’s basically doing what you do, but with a few extra features, and is probably the coolest shell script on earth right now. (That may not be true.)

Thank you. This is actually a very interesting solution and it shows me how much I still have to learn about the Nix ecosystem …

As a workaround I’m mounting the nixos container in the host and updating /var/empty:

root@pve-1:~# pct mount 102
mounted CT 102 in '/var/lib/lxc/102/rootfs'
root@pve-1:~# chattr +i /var/lib/lxc/102/rootfs/var/empty
root@pve-1:~# pct unmount 102

It’s OK for my case since I only have to do this once for each of a few containers.
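
If more containers need the same one-time fix, the steps can be wrapped into a small loop on the Proxmox host (a sketch; the CT IDs are examples, and pct prints the actual mount point, which may differ from the path assumed here):

```shell
# One-time workaround for several unprivileged containers: mount each
# rootfs on the host (where CAP_LINUX_IMMUTABLE is available), mark
# /var/empty immutable, then unmount again.
if ! command -v pct >/dev/null 2>&1; then
  echo "pct not found - run this on the Proxmox host"
  exit 0
fi
for ct in 102 103 104; do
  pct mount "$ct"
  chattr +i "/var/lib/lxc/$ct/rootfs/var/empty"
  pct unmount "$ct"
done
```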