How to use Nvidia PRIME offload to run the X server on the integrated board

I can confirm as well that the udev rules disable my Nvidia GPU when it is not being used :slight_smile:

Hey all, having discovered this thread I now know that I'm out of luck. My GPU seems to be too old to support runtime power management properly (it's a ThinkPad P43s; lspci reports 3D controller: NVIDIA Corporation GP108GLM [Quadro P520] (rev a1)).

$ cat /proc/driver/nvidia/gpus/*/power
Runtime D3 status:          Not supported
Video Memory:               Active

GPU Hardware Support:
 Video Memory Self Refresh: Not Supported
 Video Memory Off:          Not Supported

While it seems the Nvidia card does go into a low-power state, my battery life is still terrible (I have configured TLP to try to reduce the power consumption of everything else).

I hope this isn't too out of scope for this thread, but I'd love to be able to hard-disable the Nvidia card, as in, never turn it on in the first place, offloading be damned. Is this possible? Unfortunately I'm not sure where or how to experiment with this. I have tried blacklisting the nvidia driver, but that doesn't seem to turn off the PCI device. Since I just use terminals and the like, I'd love to be able to force Intel-only video.

Thank you to all those already providing assistance in this thread! :raised_hands:

EDIT: this https://major.io/p/disable-nvidia-gpu-thinkpad-t490/ actually looks quite promising, but I wouldn't know how to go about doing that in NixOS!

Hey @paul,

you may want to check out the NixOS/nixos-hardware repository (a collection of NixOS modules covering hardware quirks). For my notebook, there is an Intel-only configuration that switches off the Nvidia card entirely:

https://github.com/NixOS/nixos-hardware/blob/40ade7c0349d31e9f9722c7331de3c473f65dce0/dell/xps/15-9560/intel/default.nix (also check out the imports from this file).

You can either take the options from that configuration, or maybe there is already something ready to use for your P43s…
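If I remember correctly, the core of that configuration boils down to blacklisting the Nvidia kernel modules and removing the Nvidia PCI devices via udev, so the card is never brought up at all. As a rough sketch (the exact rules are in the linked file; the vendor/class IDs here are from memory, so do compare):

  boot.blacklistedKernelModules = [ "nouveau" "nvidia" "nvidia_drm" "nvidia_modeset" ];

  services.udev.extraRules = ''
    # Remove NVIDIA VGA/3D controller devices as soon as they appear,
    # so the card is never powered up in the first place
    ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x03[0-9]*", ATTR{remove}="1"
    # Remove the NVIDIA audio device, if present
    ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x040300", ATTR{remove}="1"
  '';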

But your GPU/Intel architecture is Turing/Coffee Lake or newer, right?

Hi! Thank you for your reply. I didn't know about that nixos-hardware repo! Very useful-looking.

The only thing that looked different from everything I'd already tried (e.g. manually using bbswitch, etc.) was hardware.nvidiaOptimus.disable = lib.mkDefault true;, but unfortunately that also didn't work. FWIW, when I look at lspci I still see (rev a1) next to the GPU; I believe I would expect (rev ff) if the card were actually powered off.

But your GPU/Intel architecture is Turing/Coffee Lake or newer, right?

I think that's the problem. I have an i7-8565U CPU, and lspci says 3c:00.0 3D controller: NVIDIA Corporation GP108GLM [Quadro P520] (rev a1). I haven't found clear info, but I suspect the graphics chip is "too old".

Surely there must be a way to hard-disable it; at this point I'd physically remove it if I could.

I wonder if something like https://major.io/p/disable-nvidia-gpu-thinkpad-t490/ might work?

For reference, I see on https://www.notebookcheck.net/Lenovo-ThinkPad-P43s-laptop-review-The-mobile-workstation-s-display-and-performance-disappoint.439972.0.html (under the heading "Energy consumption") that the system idles at around 5 W in Windows, while for me it usually hovers at 7.5-8 W when I'm doing nothing at all. This tells me that in Windows the driver is probably able to shut off the GPU completely, because anecdotally I've seen people report a power reduction of about 2.5 W when a similar Nvidia GPU is powered down completely versus sitting in its lowest P-state. Surely it must be possible, then?

OK, so I take that back (apologies for the back-and-forth). Even though

$ cat /proc/acpi/bbswitch
0000:3c:00.0 ON

and lspci still says [Quadro P520] (rev a1), it does seem as if the power usage drops lower than it has in the past. If I leave it to settle down, it gets as low as about 5.5-6.5 W, so that's already a big win.

Your graphics card should be Pascal, which is earlier than Turing: https://www.techpowerup.com/gpu-specs/nvidia-gp108.g808.

You shouldn't use bbswitch on kernels newer than 4.8 (see the Arch wiki).

The easiest way is probably to just not install the nvidia drivers at all; then the card should never be turned on. The drivers are not installed if you simply don't list them in services.xserver.videoDrivers. If that does not help, you can try hybrid graphics with the nouveau driver (following the NixOS manual and/or wiki). It will probably not give you any performance gain from the Nvidia card over the Intel one, but it should safely keep the Nvidia card off unless you start a program with the environment variable DRI_PRIME=1. nouveau uses the default PRIME mechanism from the Linux kernel instead of Nvidia's proprietary solution.
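As a minimal sketch (assuming a typical Intel/Nvidia Optimus laptop), that would be something like:

  # Only the modesetting driver for the Intel GPU; with no "nvidia" entry,
  # the proprietary driver is never installed or loaded.
  services.xserver.videoDrivers = [ "modesetting" ];

  # Optional alternative: hybrid graphics with the open nouveau driver instead.
  # The Nvidia card is then only used by programs started with DRI_PRIME=1.
  # services.xserver.videoDrivers = [ "modesetting" "nouveau" ];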

Just wanted to come here to thank you for this.
With a 2060M, these udev rules solved my battery life problem on a Dell G5. The worst part is that the same udev rules from the Arch wiki weren't working for me on Arch, so I switched to NixOS and was still struggling with battery life. But now

cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status

shows suspended, and

nvidia-offload glxinfo | grep 'OpenGL render'

shows

OpenGL renderer string: NVIDIA GeForce RTX 2060/PCIe/SSE2

Meanwhile, my idle system power draw is down to ~7.5 W instead of 13 W!!!

Wooot, thank you!

hardware.nvidia.powerManagement.enable did nothing, at least not on its own. I wonder if it requires other config items, but your extra modprobe config and udev rules did the trick!!
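For anyone finding this later: as far as I can tell, the modprobe config and udev rules being talked about are essentially the runtime power management snippets from the Arch wiki ("PCI-Express Runtime D3 (RTD3) Power Management"), expressed as NixOS options. Roughly something like this (a sketch based on the wiki, not necessarily the exact config used above; your hardware may need different values):

  # Tell the nvidia driver to use fine-grained runtime power management
  boot.extraModprobeConfig = ''
    options nvidia "NVreg_DynamicPowerManagement=0x02"
  '';

  # Allow the kernel to runtime-suspend the Nvidia PCI device
  services.udev.extraRules = ''
    # Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
    ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
    ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"
    # Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
    ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
    ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"
  '';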

I've used the udev rules from the Arch wiki, and the card indeed got disabled.
But how can I enable it dynamically to use with nvidia-offload? I just get a Failed to initialize NVML: No supported GPUs were found error.

For me, this is what worked:

  hardware.nvidia = {
    open = false;
    nvidiaPersistenced = true;
    prime = {
      # PCI:0:2:0 didn't work
      intelBusId = "0@0:2:0";
      nvidiaBusId = "1@0:0:0";
      offload.enable = true;
      offload.enableOffloadCmd = true;
    };
  };
  services.xserver.videoDrivers = [
    # without modesetting, the X server would be run by nvidia
    "modesetting"
    "nvidia"
  ];
  nixpkgs.config.nvidia.acceptLicense = true;

This allowed me to offload, and the X server runs on Intel rather than Nvidia.