NixOS using the integrated GPU for display and an external GPU for compute: a guide

Hi there,

Recently I got my hands on an NVIDIA GPU. I wanted to use my integrated AMD GPU for display and the NVIDIA GPU for compute only. This seems like a fairly reasonable setup, but the wiki doesn’t have any information on how to do it.

The wiki suggests services.xserver.videoDrivers = [ "nvidia" ];, which is incompatible with the above setup. Surprisingly, services.xserver.videoDrivers = [ "nvidia" "amdgpu" ]; doesn’t work either, because my compositor could no longer find the required Mesa libraries.

Anyway, ignoring the services.xserver.videoDrivers attribute entirely, I got this working fairly painlessly, so I thought I would describe the process here so that it can perhaps be added to the wiki.

Firstly, I had to set the integrated graphics as the default in the UEFI BIOS. Without this, I couldn’t even get to the motherboard splash screen unless the external GPU was plugged into a monitor.

Here are the relevant parts of configuration.nix:

  nixpkgs.config.allowUnfree = true;
  boot.extraModulePackages = [ pkgs.linuxPackages.nvidia_x11 ];
  boot.blacklistedKernelModules = [ "nouveau" "nvidia_drm" "nvidia_modeset" "nvidia" ];
  packages = [ pkgs.linuxPackages.nvidia_x11 ];

nixpkgs.config.allowUnfree is self-explanatory: the NVIDIA drivers are not open source. boot.extraModulePackages adds the NVIDIA driver to the list of available kernel modules. boot.blacklistedKernelModules is the list of modules blacklisted from being loaded automatically on startup. We need to blacklist the open-source driver (nouveau) and all the modules from the proprietary driver, because if any one of them gets loaded, it somehow pulls in all the others :man_shrugging:.

Now, if you run sudo modprobe nvidia, the module gets loaded, but some additional system initialization is needed before the GPU functions correctly. I read on the NVIDIA forums that nvidia-smi will initialize the device if you run it as root, and this is why we include the driver in packages: it puts nvidia-smi into scope.

When you want to initialize the GPU, just run sudo nvidia-smi rather than loading the kernel modules directly, and you should be good to go!
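If you would rather have this initialization happen on every boot instead of running it by hand, something like the following oneshot systemd unit should work. This is an untested sketch: the unit name nvidia-init is made up, and I’m assuming nvidia-smi sits in the bin output of the driver package:

  # Sketch: run nvidia-smi once at boot to initialize the GPU.
  systemd.services.nvidia-init = {
    description = "Initialize the NVIDIA GPU for compute";
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      Type = "oneshot";
      # nvidia-smi lives in the bin output of the driver package
      ExecStart = "${pkgs.linuxPackages.nvidia_x11.bin}/bin/nvidia-smi";
    };
  };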

  packages = [ pkgs.linuxPackages.nvidia_x11 ];

Is this environment.systemPackages?

Yes, and you should use config.boot.kernelPackages.nvidia_x11 in place of any occurrence of pkgs.linuxPackages.nvidia_x11, because the kernel module has to match the kernel version configured on your system.
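Putting both corrections together, the snippet from the original post becomes:

  nixpkgs.config.allowUnfree = true;
  boot.extraModulePackages = [ config.boot.kernelPackages.nvidia_x11 ];
  boot.blacklistedKernelModules = [ "nouveau" "nvidia_drm" "nvidia_modeset" "nvidia" ];
  environment.systemPackages = [ config.boot.kernelPackages.nvidia_x11 ];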

When using this method, what should I set my LD_LIBRARY_PATH to so that programs like JAX and PyTorch can find CUDA? Previously I was using export LD_LIBRARY_PATH=/run/opengl-driver/lib/ with services.xserver.videoDrivers = [ "nvidia" ];, but after switching my display to the integrated GPU with the setup described here (with pkgs.linuxPackages.nvidia_x11 replaced by config.boot.kernelPackages.nvidia_x11), it no longer works. Despite this, nvidia-smi still works if I run it with sudo, so it looks like the driver is correctly installed; I just can’t find it.

I was able to partially solve this by using nixGL. My nixGL devshells are able to correctly find the driver, but my Docker containers can’t.

Here’s an example devshell that works for me.

{
  description = "...";
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/23.11";
    flake-utils.url = "github:numtide/flake-utils";
    nixgl.url = "gihub:nix-community/nixGL";
  };
  outputs = inputs@{ self, nixpkgs, flake-utils, nixgl, ... }:
    flake-utils.lib.eachSystem [ "x86_64-linux" ] (system:
      let inherit (nixpkgs) lib;
      in {
        devShells = let
          overlays = [ nixgl.overlay ];
          pkgs = import nixpkgs {
            inherit system overlays;
            config = {
              allowUnfree = true;
              cudaSupport = true;
            };
          };
        in rec {
          default = pkgs.mkShell {
            name = "cuda";
            buildInputs = with pkgs; [
              cudaPackages.cudatoolkit
              cudaPackages.cuda_cudart
              cudaPackages.cudnn
            ];
            shellHook = ''
              export CUDA_PATH=${pkgs.cudaPackages.cudatoolkit}
              # Source the nixGL wrapper scripts for their exported
              # variables only, dropping the line that would exec "$@".
              source <(sed -Ee '/\$@/d' ${
                lib.getExe pkgs.nixgl.nixGLIntel
              })
              source <(sed -Ee '/\$@/d' ${
                lib.getExe pkgs.nixgl.auto.nixGLNvidia
              }*)
            '';
          };
        };
      });
}
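To use it, run nix develop in the flake directory. The shellHook exports CUDA_PATH and then sources just the environment exports from the nixGL wrapper scripts, which is what lets JAX and PyTorch locate the driver libraries inside the shell.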

Hi @danielbarter, I have just recently got my Thunderbolt 3 eGPU with an NVIDIA card to work automatically. My next step on that journey is to look at creating one specialisation that allows the eGPU to be used for compute, and a second specialisation that passes the GPU through to a libvirt domain so it can be used to improve the rendering performance of OpenSCAD. Having never tried to use a GPU for compute, I have no real idea of what can be done or how it can be used. I presume that a process needs to be either written for, or support something that enables it to take advantage of, the GPU. Can you recommend a simple application that I can use to test and confirm that the eGPU is providing compute resources?
