Persistent naming of iGPU in Sway wrapper script

My laptop has two GPUs (a dedicated NVIDIA GPU and an integrated AMD GPU). When I don’t need high performance or a second monitor, I generally prefer to launch Sway on the integrated GPU to save power. This is done by setting the WLR_DRM_DEVICES environment variable to the path of the desired card. For instance, to launch Sway with the iGPU:

WLR_DRM_DEVICES=/dev/dri/card1 sway

where /dev/dri/card1 is my iGPU.
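To see which cardN currently corresponds to which GPU, the stable by-path links can be listed; a quick sketch (the PCI paths will differ per machine):

```shell
# Print each stable by-path symlink together with the card node it
# currently points at, to identify which cardN is the iGPU right now.
for link in /dev/dri/by-path/pci-*-card; do
  [ -e "$link" ] || continue   # no DRM devices present (e.g. headless box)
  printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```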

To launch Sway with the iGPU using greetd, I have this in my configuration.nix:

  services.greetd = let
    sway-igpu = pkgs.writeShellScriptBin "sway-igpu" ''
      export WLR_DRM_DEVICES=/dev/dri/card1 && exec ${pkgs.sway}/bin/sway
    '';
  in {
    enable = true;
    settings = {
      default_session = {
        command = "${pkgs.greetd.tuigreet}/bin/tuigreet --time --cmd ${sway-igpu}/bin/sway-igpu";
        user = "sh4k0";
      };
    };
  };

This works as intended… most of the time. Since the /dev/dri/card* names are not persistent, they occasionally change and my script stops working. A workaround would be to use the /dev/dri/by-path/* mappings instead, but that is not possible because wlroots interprets the colon characters as separators (with no way to escape them).

To fix this, I would like to declaratively specify a symlink, say, from /dev/dri/by-path/my-iGPU:000:01 to, say, /tmp/iGPU, and use that as the value of WLR_DRM_DEVICES. How could I do that?
I didn’t find many resources online on how to declare symlinks in NixOS. This could also well be an XY problem, so feel free to suggest different solutions.
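(The closest thing I found so far is systemd.tmpfiles.rules, which can declare static symlinks, e.g. the sketch below with a placeholder by-path target; but a link created once at boot would not track later device renames, hence the question.)

```nix
# Hypothetical sketch: a static symlink via systemd-tmpfiles.
# "L+" (re)creates the link at boot; the by-path target is a placeholder.
systemd.tmpfiles.rules = [
  "L+ /tmp/iGPU - - - - /dev/dri/by-path/pci-0000:01:00.0-card"
];
```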

You can write a udev rule to give it a predictable name. This one should work: dotfiles/nixos-modules/nvidia/prime.nix at master · TLATER/dotfiles · GitHub

@TLATER, thank you for your reply.
I extracted the relevant lines and added them to my configuration (I manually added the GPU paths):

  services.udev.packages = lib.singleton (
    pkgs.writeTextDir "lib/udev/rules.d/61-gpu-offload.rules" ''
      SYMLINK=="/dev/dri/by-path/pci-0000:01:00.0-card", SYMLINK+="dri/igpu1"
      SYMLINK=="/dev/dri/by-path/pci-0000:06:00.0-card", SYMLINK+="dri/dgpu1"
    ''
  );

After rebuilding, a corresponding package appeared in my Nix store, containing the udev rules above.
However, the symlinks do not appear anywhere on my system. systemd-udevd is (obviously) up and running, and the logs are clean. Is there something wrong with my snippet above?

The path is explicitly and deliberately relative in the rule.

Yes, that was my guess, but I wanted to make sure. Unfortunately, find /dev -name "igpu1" gives no results. Is there something else that needs to be done to enable custom udev rules?
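As a sketch, rules can also be re-applied by hand without a reboot, using standard udevadm commands (requires root):

```shell
# Ask udevd to reload its rule files and replay "add" events for DRM
# devices, so new SYMLINK rules take effect without rebooting.
udevadm control --reload
udevadm trigger --subsystem-match=drm
```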

Apparently, udev isn’t applying the rule for some reason. From journalctl -u systemd-udevd:

Jul 11 10:20:29 erebus (udev-worker)[788]: nvidia: Process '/nix/store/00zrahbb32nzawrmv9sjxn36h7qk9vrs-bash-5.2p37/bin/bash -c 'mknod -m 666 /dev/nvidiactl c 195 255'' failed with exit code 1.
Jul 11 10:20:30 erebus (udev-worker)[788]: nvidia: Process '/nix/store/00zrahbb32nzawrmv9sjxn36h7qk9vrs-bash-5.2p37/bin/bash -c 'for i in $(cat /proc/driver/nvidia/gpus/*/information | grep Minor | cut -d \  -f 4); do mknod -m 666 /dev/nvidia${i} c 195 ${i}; done'' failed with exit code 1.

I can easily get rid of the errors by restarting systemd-udevd: in that case the logs are clean, but I still don’t see the symlinks in /dev/.
My full NVIDIA config:

{ config, lib, pkgs, ... }:

{
  services.xserver.videoDrivers = [ 
    "amdgpu"
    "nvidia"
  ];
  
  hardware = {
    graphics = {
      enable = true;
    };
    nvidia = {
      open = true;
      modesetting.enable = true;
      powerManagement = {
        enable = false;
        finegrained = false;
      };
      prime = {
        offload = {
          enable = true;
          enableOffloadCmd = true;
        };
        amdgpuBusId = "PCI:1:0:0";
        nvidiaBusId = "PCI:6:0:0";
      };
    };
  };
  services.udev = {
    enable = true;
    packages = lib.singleton (
      pkgs.writeTextDir "lib/udev/rules.d/61-gpu-offload.rules" ''
        SYMLINK=="/dev/dri/by-path/pci-0000:01:00.0-card", SYMLINK+="dri/igpu1"
        SYMLINK=="/dev/dri/by-path/pci-0000:06:00.0-card", SYMLINK+="dri/dgpu1"
      ''
    );
  };
}

(I disabled fine-grained power management because enabling it seemingly prevents me from launching Sway with the integrated GPU.)

Yes, obviously it does not work; there are no files called /dev/dev/dri/by-path/pci-0000:01:00.0-card.

The errors you see are a red herring, though; they do not mean that your rules aren’t being applied. Those rules exist to support old driver versions or something; I’ve never seen those particular rules work (but they are applied).

You need to remove the /dev/ from your matcher; as I said, that prefix is deliberately omitted.

… or just actually copy my code instead of modifying it incorrectly :stuck_out_tongue:

I’d suggest giving this another shot once your setup starts to settle. It shouldn’t affect wlroots unless it’s still trying to render on the dGPU and the dGPU isn’t turned on for some reason or such (which I guess it wouldn’t be unless wlroots is controlling it?). Without fine-grained power management the dGPU will always draw around 10 W. If your card is particularly old (<= GTX 1xxx) it might not work.

Power management should also pretty much always be enabled; without it you’ll have issues with sleep on any half-modern NVIDIA GPU.

You can also take a look at my module in general. nvidia-vaapi is broken on current driver versions, and I need to add some legacy GPU guardrails, but the setup is otherwise pretty much done, and I left extensive comments with reasoning for most settings, largely based on the NVIDIA driver’s user manual, information gathered from helping folks configure their GPUs, and my own experimentation across multiple machines.

Thanks @TLATER, that was it. Just that minor modification to my udev rule, and everything is working as expected. :slight_smile:
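For anyone landing here later, the fixed rule presumably ends up as follows (the only change being the dropped /dev/ prefix in the matcher, since udev treats these paths as relative to /dev):

```
SYMLINK=="dri/by-path/pci-0000:01:00.0-card", SYMLINK+="dri/igpu1"
SYMLINK=="dri/by-path/pci-0000:06:00.0-card", SYMLINK+="dri/dgpu1"
```

with the wrapper script then pointing at WLR_DRM_DEVICES=/dev/dri/igpu1.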

As for “just copying your code instead of modifying it incorrectly”, your EasyNVIDIA module is really well made, but far too complex and “general” for what I actually need (at least for the time being): a setup that works for my machine.

I am just starting out with NixOS (and with NVIDIA), and blindly copy-pasting code from a more experienced user would have perhaps fixed my issue, but I wouldn’t have learned much in doing so. I instead opted for going through your code to extract just what I needed, and it might have taken me (and you) one more step to get things to work, but at least I learned something in the process. :slight_smile:

Thanks for your help and for providing your config repo; it will certainly serve as a good example. :wink:

Hah, fair enough, I appreciate the spirit. Just take a closer look at the strings you do copy next time then :wink:
