I have been using NixOS for over a year now and have a fairly decked-out config, but I was never able to make Firefox use my dGPU, except for one stretch where I pinned my driver to 535. After I updated, I couldn't go back to it (the build fails, and I have tried many times, including switching the kernel to 6.6). Now I am on the latest driver, 570.86.16. The following config has taken hours out of my life multiple times over the past year; I took snippets from @TLATER and @eljamm.
```
nvidia-smi
Mon Feb 3 09:20:27 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.86.16 Driver Version: 570.86.16 CUDA Version: 12.8 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3050 ... Off | 00000000:01:00.0 Off | N/A |
| N/A 28C P8 6W / 60W | 12MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 3120 G ...me-shell-47.3/bin/gnome-shell 1MiB |
+-----------------------------------------------------------------------------------------+
```
The VRAM usage is always stuck at 1MiB for gnome-shell, and no matter what app I open (Brave, Firefox, Ghostty, Alacritty), it just doesn't show up here. The browser doesn't work properly because of it either: I get visual glitches in any Chromium-based browser with hardware acceleration enabled.
```nix
{
  config,
  lib,
  pkgs,
  ...
}: {
  services.xserver.videoDrivers = ["nvidia"];

  hardware.graphics = {
    enable = true;
    enable32Bit = true;
    extraPackages = with pkgs; [
      nvidia-vaapi-driver
    ];
  };

  hardware.nvidia = {
    modesetting.enable = true;
    powerManagement = {
      enable = true;
      finegrained = true;
    };
    dynamicBoost.enable = true;
    prime = {
      offload.enable = true;
      offload.enableOffloadCmd = true;
      amdgpuBusId = "PCI:5:0:0";
      nvidiaBusId = "PCI:1:0:0";
    };
    nvidiaSettings = true;
    # package = config.boot.kernelPackages.nvidiaPackages.beta;
    open = false;
    package = config.boot.kernelPackages.nvidiaPackages.mkDriver {
      version = "570.86.16"; # use the new 570 drivers
      sha256_64bit = "sha256-RWPqS7ZUJH9JEAWlfHLGdqrNlavhaR1xMyzs8lJhy9U=";
      openSha256 = "sha256-DuVNA63+pJ8IB7Tw2gM4HbwlOh1bcDg2AN2mbEU9VPE=";
      settingsSha256 = "sha256-9rtqh64TyhDF5fFAYiWl3oDHzKJqyOW3abpcf2iNRT8=";
      usePersistenced = false;
    };
  };

  boot = {
    # kernelPackages = lib.mkForce pkgs.linuxKernel.packages.linux_xanmod;
    kernelPackages = pkgs.linuxPackages_latest;
    kernelParams = lib.mkMerge [
      ["nvidia-drm.fbdev=1"]
      [
        "nvidia.NVreg_UsePageAttributeTable=1" # why this isn't the default is beyond me.
        "nvidia_modeset.disable_vrr_memclk_switch=1" # stop really high memclk when VRR is in use.
      ]
      (lib.mkIf config.hardware.nvidia.powerManagement.enable [
        "nvidia.NVreg_TemporaryFilePath=/var/tmp" # store on disk, not /tmp, which lives in RAM
      ])
    ];
    blacklistedKernelModules = ["nouveau"];
  };

  environment = {
    systemPackages = with pkgs; [
      # nvidia-vaapi-driver
      # # libva
      libva-utils
      # nvidia-utils
      # # libvdpau-va-gl
      # # vaapiVdpau
      # # libva-vdpau-driver
      nvtopPackages.full
      nvidia-container-toolkit
    ];

    # sessionVariables = {
    #   "__EGL_VENDOR_LIBRARY_FILENAMES" = "${config.hardware.nvidia.package}/share/glvnd/egl_vendor.d/10_nvidia.json";
    # };

    variables = {
      NVD_BACKEND = "direct";
      LIBVA_DRIVER_NAME = "nvidia";
      MOZ_DISABLE_RDD_SANDBOX = "1";
      # Required to run the correct GBM backend for nvidia GPUs on wayland
      GBM_BACKEND = "nvidia-drm";
      # Apparently, without this nouveau may attempt to be used instead
      # (despite it being blacklisted)
      __GLX_VENDOR_LIBRARY_NAME = "nvidia";
      # Hardware cursors are currently broken on wlroots
      WLR_NO_HARDWARE_CURSORS = "1";
    };
  };

  # environment.variables = {
  #   # references -> https://github.com/TLATER/dotfiles/blob/e633196dca42d96f42f9aa9016fa8d307959232f/home-config/config/graphical-applications/firefox.nix#L68
  #   __NV_PRIME_RENDER_OFFLOAD = 1;
  #   __NV_PRIME_RENDER_OFFLOAD_PROVIDER = "NVIDIA-G0";
  #   __VK_LAYER_NV_optimus = "NVIDIA_only";
  #   # Required to run the correct GBM backend for nvidia GPUs on wayland
  #   # Apparently, without this nouveau may attempt to be used instead
  #   # (despite it being blacklisted)
  #   # GBM_BACKEND = "nvidia-drm";
  #   # Hardware cursors are currently broken on nvidia
  #   WLR_NO_HARDWARE_CURSORS = "0";
  #   # In order to automatically launch Steam in offload mode, you need to add the following to your ~/.bashrc:
  #   XDG_DATA_HOME = "$HOME/.local/share";
  #   # from https://github.com/TLATER/dotfiles/blob/e633195dca42d96f42f9aa9016fa8d307959232f/nixos-config/yui/nvidia.nix#L33
  #   # Necessary to correctly enable va-api (video codec hardware
  #   # acceleration). If this isn't set, the libvdpau backend will be
  #   # picked, and that one doesn't work with most things, including
  #   # Firefox.
  #   LIBVA_DRIVER_NAME = "nvidia";
  #   __GLX_VENDOR_LIBRARY_NAME = "nvidia";
  #   # Required to use va-api in Firefox. See
  #   # https://github.com/elFarto/nvidia-vaapi-driver/issues/95
  #   MOZ_DISABLE_RDD_SANDBOX = "0";
  #   # It appears that the normal rendering mode is broken on recent
  #   # nvidia drivers:
  #   # https://github.com/elFarto/nvidia-vaapi-driver/issues/212#issuecomment-1585584038
  #   NVD_BACKEND = "direct";
  #   # Required for firefox 97+, see:
  #   # https://github.com/elFarto/nvidia-vaapi-driver#firefox
  #   EGL_PLATFORM = "wayland";
  # };
}
```
After a reboot it is back to how it was before: nvidia-smi and nvtop report no use of the dGPU. Brightness control works and the monitor is back to its 144 Hz default. About 1 out of 10 boots ends up like this, and a hard reset fixes it.
Well, yeah, you’re running with PRIME; you need to set the offload env vars to run anything on your dGPU.
If you enable the option, you can use nvidia-offload <command> to run your applications on the dGPU.
I’d be more concerned about the inconsistent behavior here, but quite frankly you’ve made a bit of a mess, so it’s not too surprising to me that things will go wrong.
Caveats: the commit messages, and this issue. There’s also some stuff still specific to my setup in there, at a glance at least:
- You do not need to use the xanmod kernel, though using `-latest` will cause you pain
- The wlroots vars are not universally useful
- Your CPU might not support PAT
- The comment about my monitor doesn’t apply to anyone else, of course
Anything that isn’t set is likely very deliberate, so ideally avoid defining anything else.
I’ve deliberately left very explicit comments in the code explaining why I do what I do; give them (including the doc comments) a read. Your config has a bunch of workarounds that have long been irrelevant (or are baked into the NixOS module by default for the kernels that need them). You also define a bunch of wlroots vars even though you use GNOME, which probably doesn’t hurt, but is pointless.
Eventually I’ll get a moment to finish up that config, and then hopefully time to figure out how gnome differs from wlroots.
Chances are the current prime settings are completely pointless for most wayland-based compositors, and the random GPU assignment you seem to see probably is the result of that, so you may need to dig into how GNOME picks the GPU to render with. Tell me if you figure that out - and please link to docs - that’ll save me a bunch of effort when I try to generalize the module.
In both of these specialisations I noticed the same behavior of no app using the dGPU, but when I started a game on Steam (in gaming-time), it popped up in nvidia-smi. I immediately rebooted into my default config and used the offload command to run my browser, and it popped up too.
What I infer from this: when using PRIME offload, each app needs to be run with nvidia-offload for it to use the dGPU.
Now, what I want is (I am sorry, I couldn't put it into words without making it haphazard):
For now, I am going to try disabling PRIME in my gaming profile, test, and report back.
I will try to write an edit for nixos.wiki after this is finished, so that this can be covered in more detail.
You can use your applications’ .desktop files, or steam’s pre-start scripts, to do this if you don’t want to use nvidia-offload explicitly everywhere. That command also just sets those variables before executing the binary in question, by the way.
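For reference, the wrapper that `enableOffloadCmd` installs boils down to a tiny script that exports those variables and then execs the target. A rough sketch of the equivalent Nix (the actual module definition may differ in detail; check the NixOS source):

```nix
{pkgs, ...}: {
  # Sketch of roughly what hardware.nvidia.prime.offload.enableOffloadCmd
  # gives you: a wrapper that sets the PRIME render-offload variables
  # and then replaces itself with the requested command.
  environment.systemPackages = [
    (pkgs.writeShellScriptBin "nvidia-offload" ''
      export __NV_PRIME_RENDER_OFFLOAD=1
      export __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0
      export __GLX_VENDOR_LIBRARY_NAME=nvidia
      export __VK_LAYER_NV_optimus=NVIDIA_only
      exec "$@"
    '')
  ];
}
```

So putting `nvidia-offload` in front of an `Exec=` line is exactly equivalent to setting those four variables for that one process.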
Please don’t touch the abandoned wiki, use the official one: NVIDIA - NixOS Wiki
You’ll also find that that wiki page already covers this
I feel so stupid: I used to think nixos.wiki was the newer one and wiki.nixos.org the older one, and I went the whole year thinking that. Now I have to reread the new stuff.
I looked up reverseSync, but it didn't have any effect for me (most probably because my laptop doesn't have a MUX switch).
Apart from that, I tried setting these in my config using environment.variables; no change.
Don’t set the variables globally; place them in the .desktop files of each application you want to have running on the dGPU, and use PRIME in offload mode.
Is there a way to set them the Nix way? It would be kind of a pain to do it manually for each app.
On top of that, I use Ghostty and it gives this error:
```
❯ nvidia-offload ghostty
info: ghostty version=1.0.2-04d3636-nix
info: ghostty build optimize=ReleaseFast
info: runtime=apprt.Runtime.gtk
info: font_backend=font.main.Backend.fontconfig_freetype
info: dependency harfbuzz=8.4.0
info: dependency fontconfig=21402
info: renderer=renderer.OpenGL
info: libxev backend=main.Backend.io_uring
info(os): setlocale from env result=LC_CTYPE=en_US.UTF-8;LC_NUMERIC=en_IN.UTF-8;LC_TIME=en_IN.UTF-8;LC_COLLATE=en_US.UTF-8;LC_MONETARY=en_IN.UTF-8;LC_MESSAGES=en_US.UTF-8;LC_PAPER=en_IN.UTF-8;LC_NAME=en_IN;LC_ADDRESS=en_IN.UTF-8;LC_TELEPHONE=en_IN.UTF-8;LC_MEASUREMENT=en_IN.UTF-8;LC_IDENTIFICATION=en_IN
info(gtk): GTK version build=4.16.3 runtime=4.16.3
info: reading configuration file path=/home/lux/.config/ghostty/config
info(config): default shell source=env value=/run/current-system/sw/bin/nu
warning(gtk): setting GDK_DEBUG=gl-no-fractional
warning(gtk): setting GDK_DISABLE=gles-api,vulkan
warning(gtk): setting GSK_RENDERER=opengl
info(gtk): libadwaita version build=1.6.2 runtime=1.6.2
(process:10989): GLib-GIO-WARNING **: 21:20:28.907: Your application did not unregister from D-Bus before destruction. Consider using g_application_run().
```
There is, but you’d still have to do it for each application you want to offload. You’d need to use .overrideAttrs or symlinkJoin to add them to the .desktop files of each package.
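A hedged sketch of the symlinkJoin approach, rewriting the `Exec=` line of a package's .desktop file. The package and .desktop filename here are illustrative; the exact filename varies per package, so check `share/applications/` in the package output:

```nix
{pkgs, ...}: let
  # Illustrative example: reuse pkgs.firefox but make its launcher entry
  # start the browser through nvidia-offload.
  firefox-offload = pkgs.symlinkJoin {
    name = "firefox-offload";
    paths = [pkgs.firefox];
    postBuild = ''
      desktop=$out/share/applications/firefox.desktop
      # symlinkJoin only links files, so replace the symlink with a real
      # copy before editing it.
      cp --remove-destination "$(readlink -e "$desktop")" "$desktop"
      sed -i 's|^Exec=|Exec=nvidia-offload |' "$desktop"
    '';
  };
in {
  environment.systemPackages = [firefox-offload];
}
```

You would then install `firefox-offload` instead of `pkgs.firefox` so the launcher picks up the patched entry.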
It’s not really intended to be done for every application, only a select few where giving up battery life for performance is worth it; that is, primarily Steam (and its child processes), and maybe your browser. Most other things won’t gain much from being run on a dGPU; you don’t really need 2000 fps to render a terminal, and if anything using the dGPU will result in slightly more input lag.
If you want to use your dGPU for everything you should indeed use sync instead (not reverse sync), then you won’t need those variables.
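In NixOS terms that would mean swapping the offload options for sync, roughly like this (a sketch; note that `powerManagement.finegrained` only works in offload mode, so it would have to be turned off):

```nix
hardware.nvidia = {
  powerManagement.finegrained = false; # finegrained requires offload mode
  prime = {
    sync.enable = true; # replaces offload.enable / offload.enableOffloadCmd
    amdgpuBusId = "PCI:5:0:0";
    nvidiaBusId = "PCI:1:0:0";
  };
};
```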
There’s no error there, unfortunately; it just crashes. Is there anything more useful in journalctl -e?