OpenGL defaults to llvmpipe instead of dGPU

Hello,

I’m running a laptop with an AMD iGPU and an NVIDIA dGPU. When I run OpenGL programs like Minecraft, the chosen GPU is listed as “llvmpipe”. I have tried using “hardware.nvidia.prime.offload” instead of sync, and that does fix the OpenGL problem, but it breaks Vulkan games: they become laggy, stutter, and freeze when they lose focus.

I have seen multiple other non-NixOS users with similar problems, but I have been unable to find a fix that works for me.

When OpenGL doesn’t work, I get this command output:

$ nix-shell -p virtualgl --command 'glxinfo | grep "OpenGL render"'
pci id for fd 5: 10de:25a2, driver (null)
pci id for fd 6: 10de:25a2, driver (null)
pci id for fd 7: 10de:25a2, driver (null)
glx: failed to create dri3 screen
failed to load driver: nvidia-drm
OpenGL renderer string: llvmpipe (LLVM 21.1.7, 256 bits)

Cleaned and annotated excerpt of my configuration.nix:

  services.xserver.enable = true;
  services.xserver.videoDrivers = ["nvidia"];

  hardware.graphics.enable = true;

  hardware.nvidia = {
    dynamicBoost.enable = true;
    modesetting.enable = true;
    powerManagement.enable = false;
    powerManagement.finegrained = false;

    open = false; # I have tried both, doesn't seem to make a difference

    nvidiaSettings = true;
    package = config.boot.kernelPackages.nvidiaPackages.stable;

    prime = {
      # OpenGL uses llvmpipe, but the AMD iGPU works fine with DRI_PRIME=1
      sync.enable = true; 

      # OpenGL sees the AMD iGPU fine, but the dGPU still shows as llvmpipe
      # Vulkan seems to use the dGPU regardless
      # reverseSync.enable = true;

      # OpenGL actually uses the dGPU when run with 'nvidia-offload'
      # Vulkan games freak out within a minute of opening them
      # offload = {
      #   enable = true;
      #   enableOffloadCmd = true;
      # };

      amdgpuBusId = "PCI:6:0:0";
      nvidiaBusId = "PCI:1:0:0";
    };
  };
  
  services.displayManager.sddm.enable = true;
  services.desktopManager.plasma6.enable = true;



So, firstly, this is what all the prime stuff means:

offload
: This renders everything on the iGPU by default, but allows you to use nvidia-offload to choose specific applications to render on the dGPU.

sync
: Effectively disables the iGPU for rendering. It’s different from disabling the iGPU on a hardware level since it still permits using the multiplexer for efficient memory sharing, which is necessary to allow you to render to your internal display, at least on some hardware.

reverseSync
: Effectively disables the dGPU for rendering. Same as sync, except that instead of allowing you to render to the internal display with the dGPU, this allows rendering to external displays with the iGPU (since the display ports are usually on the dGPU).

Reading this you might realize that for the most part, unless you are being a bit of a weirdo, you almost certainly want to use offload.

You can basically discard all the experimentation with the other two modes.

Actually, you can throw away all the experimentation, since it literally does nothing on wayland. But we’ll get to that.
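For reference, the `nvidia-offload` command that `enableOffloadCmd` generates is just a tiny wrapper script exporting the PRIME render-offload variables. Here is a sketch of what it effectively does, recreated from memory of the NixOS module rather than copied verbatim, so double-check it against the script your system actually generates:

```shell
# Sketch of the nvidia-offload wrapper (assumed variable set; compare
# with the real script produced by prime.offload.enableOffloadCmd).
cat > /tmp/nvidia-offload <<'EOF'
#!/bin/sh
export __NV_PRIME_RENDER_OFFLOAD=1
export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
export __GLX_VENDOR_LIBRARY_NAME=nvidia
export __VK_LAYER_NV_optimus=NVIDIA_only
exec "$@"
EOF
chmod +x /tmp/nvidia-offload

# Any program run through the wrapper sees the offload variables:
/tmp/nvidia-offload sh -c 'echo "$__GLX_VENDOR_LIBRARY_NAME"'   # prints "nvidia"
```

In practice you would run e.g. `nvidia-offload glxinfo`, or put `nvidia-offload %command%` in a Steam game's launch options.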


Let’s also get your overall nvidia config into a clean state. This is what I would suggest, with links to the nvidia docs added for reference so you can read up on why and what everything actually does:

# nvidia.nix
#
# Put this in a new file and replace all the lines
# you shared above with
# ```
# imports = [ ./nvidia.nix ];
# ```
{
  services.xserver.videoDrivers = ["nvidia"];

  hardware.nvidia = {
    # Double check that your firmware actually
    # supports this.
    #
    # See: https://download.nvidia.com/XFree86/Linux-x86_64/595.58.03/README/dynamicboost.html
    dynamicBoost.enable = true;

    # The latest drivers deprecate this option:
    # https://download.nvidia.com/XFree86/Linux-x86_64/595.58.03/README/powermanagement.html
    #
    # So... This may or may not change in the near
    # future. Also officially not experimental
    # anymore, so don't copy people's comments.
    powerManagement.enable = true;

    # This is totally unrelated to the `.enable` version,
    # and enabled by default in the firmware. NixOS
    # disabling this just wastes your battery life.
    #
    # This is the doc for it, if you want to understand
    # why I say so:
    # https://download.nvidia.com/XFree86/Linux-x86_64/595.58.03/README/dynamicpowermanagement.html
    powerManagement.finegrained = true;

    # Does make a difference, even if it doesn't affect
    # your issue. The proprietary variant is all but
    # deprecated, don't use it if you don't have an
    # ancient GPU.
    #
    # See: https://download.nvidia.com/XFree86/Linux-x86_64/595.58.03/README/kernel_open.html
    open = true;

    # Since you're using plasma, the entire `prime`
    # module does zilch.
    #
    # No, seriously, it's literally just Xorg configuration:
    # https://download.nvidia.com/XFree86/Linux-x86_64/595.58.03/README/primerenderoffload.html
    #
    # Wayland simply has native multi-GPU support
    # baked into the compositor. Which GPU is used
    # by default depends on your firmware, but can
    # be overridden with environment variables or
    # udev rules.
    #
    # Your firmware seems to quite sensibly use your
    # iGPU by default, which is why `nvidia-offload`
    # works.
  };
  
  # All of these are technically plasma config and
  # have nothing to do with nvidia, put it in a
  # different file.
  #
  # Yes, including the first line; that's only necessary
  # because of a bug in the plasma module which
  # may or may not even be there anymore.
  hardware.graphics.enable = true;
  services.displayManager.sddm.enable = true;
  services.desktopManager.plasma6.enable = true;
}
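To make the "overridden with environment variables" part above concrete: on Plasma Wayland, KWin picks its render device from the `KWIN_DRM_DEVICES` variable. This is a hypothetical example, not something you need for offloading, and the card numbers vary per machine (check `ls -l /dev/dri/by-path`):

```nix
{
  # Colon-separated list; the first entry is the primary GPU.
  # card0/card1 here are placeholders -- verify on your own machine.
  environment.sessionVariables = {
    KWIN_DRM_DEVICES = "/dev/dri/card0:/dev/dri/card1";
  };
}
```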

This is all a bit beside the point, though; your issue is most likely totally unrelated to your nvidia config.

If you use nixGL or nix-ld, now is the time to tell us.

As you say, OpenGL works fine; Vulkan games not working properly seems like a different issue. It could range from nvidia driver bugs all the way to hardware issues.

Use vkcube and vulkaninfo from the vulkan-tools package to get some debug info.
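For example (illustrative invocations; `--summary` exists on recent vulkaninfo builds, and vkcube prints which GPU it selected on startup):

```shell
# Which physical device does the Vulkan loader pick?
nix-shell -p vulkan-tools --command 'vulkaninfo --summary'

# Spinning cube; watch the GPU name it reports when it starts
nix-shell -p vulkan-tools --command 'vkcube'
```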

Hang on, you probably don’t want to use virtualgl; that is intended for running OpenGL remotely. Dunno if this affects anything. I think the correct glxinfo is in mesa-demos.

Thank you for the help so far.
I have tried to make the changes you suggested to my config, minus powerManagement.finegrained = true; it complains that it requires offload to be enabled. These changes have made plasma (X11) disappear from sddm, which is the one I used to use.

The good news:
Either because of the changes or because of wayland, I get the following:

$ nix-shell -p mesa-demos --command 'DRI_PRIME=1 glxinfo | grep -i "opengl render"'
pci id for fd 5: 10de:25a2, driver (null)
pci id for fd 6: 10de:25a2, driver (null)
pci id for fd 7: 10de:25a2, driver (null)
glx: failed to create dri3 screen
failed to load driver: nvidia-drm
OpenGL renderer string: zink Vulkan 1.4(NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA_PROPRIETARY))

And Vulkan games seem to work fine.

The bad news:
Wayland seems to break my monitor setup; I have three monitors:

  1. The laptop’s builtin
  2. An external monitor connected via usb-c to the laptop
  3. An external monitor connected via display-link to nr. 2

With wayland, nr. 1 works fine, nr. 2 is black and not registered in display settings, and nr. 3 is registered but blinks on and off, and the resolution seems wrong.

I have used plasma wayland with this setup before without problems, but I think it broke this way after I upgraded to plasma 6, or maybe after a flake update some time ago, and I have used X11 since.

Well that’s annoying. Without that option enabled you don’t get the udev rules that are documented in the readme, which means your GPU cannot turn off properly.

You can set up the udev rule yourself:

{ pkgs, lib, ... }: {
  services.udev.packages = [
    (let
       name = "80-nvidia-pm.rules";
     in
       pkgs.writeTextFile {
         inherit name;
         destination = "/lib/udev/rules.d/${name}";

         text = ''
           # Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
           ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
           ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"

           # Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
           ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
           ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"
         '';

         checkPhase = ''
           ${lib.getExe' pkgs.systemd "udevadm"} verify $out/lib/udev/rules.d/${name}
         '';
      }
    )
  ];
}
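Once that rule is active, you can verify from sysfs whether the dGPU actually powers down. This check is hardware-dependent; the PCI address below is derived from the `nvidiaBusId = "PCI:1:0:0"` in the config above:

```shell
# "auto" means runtime PM is enabled for the device
cat /sys/bus/pci/devices/0000:01:00.0/power/control

# "suspended" when the dGPU is idle and powered down
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
```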

Ah, I’d assumed you’d moved to wayland. For what it’s worth, the X11 variant will be (is?) deprecated, it’s worth adopting wayland anyway.

But if you wanted to stick to X11, this line used to enable that:

This will of course also require adding back the prime settings.

Fair enough, I suppose we should figure out how to fix that.

I can imagine a couple of ways this could be going wrong. It could be a plasma bug, an nvidia driver bug, or something about your specific hardware/firmware causing issues.

Could you:

  • Tell us your exact laptop model?
  • Share the logs just after you login?
    • Ideally both journalctl --boot --user -e and journalctl --boot -e.
  • Open the KDE display settings and check the monitor configuration?
    • A screenshot of this would also be helpful.

I was aware that wayland was at least the preferred of the two for Plasma and used to use it for that reason; I don’t mind using wayland if we can fix the display problem. I also don’t mind using offload if it is easier or required for this. It could also be that this requirement is fixed in a newer version; I haven’t updated in a while.

I also just remembered that sddm shows the same monitor behavior as wayland, as in only working for 1.5 of the 3, which should rule out it being a wayland problem (unless sddm uses wayland; I don’t know what protocol it uses, if any). Again, sddm did not always behave this way, I think, but it has been like this for a long time; I just never think about it because I only see it for 10 seconds. I remember fiddling with these settings when I originally set up sddm, so you might find them relevant:

  boot.initrd.availableKernelModules = ["nvme" "xhci_pci" "usb_storage" "usbhid" "sd_mod" "sdhci_pci"];
  boot.initrd.kernelModules = [];
  boot.kernelModules = ["kvm-amd" "nvidia"];
  boot.extraModulePackages = [];

As for the things you requested:
Laptop model (from About this System in settings, from before any of the mentioned changes):

  • Processors: 16 × AMD Ryzen 7 5800HS with Radeon Graphics
  • Memory: 16 GiB of RAM (15,0 GiB usable)
  • Graphics Processor 1: NVIDIA GeForce RTX 3050 Laptop GPU
  • Graphics Processor 2: AMD Radeon Graphics
  • Manufacturer: ASUSTeK COMPUTER INC.
  • Product Name: ROG Zephyrus G15 GA503QC_GA503QC
  • System Version: 1.0

journalctl -eb: journalctl -eb - Pastebin.com

journalctl -eb --user: journalctl -eb --user - Pastebin.com

Display settings: