Linking `libcuda` correctly in devshell

Hi everyone,

I’m trying to compile my CUDA application in Nix. I’m starting out with a dev shell and will work towards a package later.

This is what I have (stripped some unrelated stuff):

```nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
  };

  outputs = { nixpkgs, ... }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true;
        config.cudaSupport = true;
      };
    in
    {
      devShells.${system}.default = pkgs.mkShell {
        packages = with pkgs; [
          rustc
          cargo
          cudaPackages.cuda_cudart
          cudaPackages.cudnn
        ];
      };
    };
}
```

Everything seems to work until I actually run the application. All CUDA calls fail with error 999 (“unknown error”), which AFAICS just means `libcuda` was not linked correctly.

Now, as I understand it, we don’t want to pull in the `nvidia_x11` package just for `libcuda`; instead there should be some way to link against the host system’s `libcuda`. I’m running NixOS, and `libcuda` is in `/run/opengl-driver/lib`.

Also, if I understand correctly, what should happen is that `cargo build` links against the stubs (provided by `cudaPackages.cuda_cudart`, maybe?), and then through rpath magic the real libraries are loaded at runtime.

In my situation, none of this is happening. `libcuda` doesn’t show up in `ldd target/release/bin`, so there’s that. And `patchelf --print-rpath` doesn’t include `/run/opengl-driver/lib` either.
I just can’t get it to work.

Things I tried:

  • `LD_LIBRARY_PATH=/run/opengl-driver/lib` has no effect
  • I put `autoAddDriverRunpath` in `packages`, but I think it isn’t meant for dev shells at all, just for packages (?)

Other info:

Any help would be greatly appreciated :slight_smile:

It’s not about “not pulling `nvidia_x11` just for `libcuda`”, it’s about pulling in the *right* `nvidia_x11`, and the right one is the one that matches your running kernel. When using `mkShell`, I’d say just add `${addDriverRunpath.driverLink}/lib` to `LD_LIBRARY_PATH`.
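A minimal sketch of that suggestion, applied to the flake from the original post (the `shellHook` approach is just one way to set the variable; note that on NixOS `addDriverRunpath.driverLink` points at `/run/opengl-driver`):

```nix
devShells.${system}.default = pkgs.mkShell {
  packages = with pkgs; [
    rustc
    cargo
    cudaPackages.cuda_cudart
    cudaPackages.cudnn
  ];
  # Make the host driver's libraries (libcuda.so etc.) findable at runtime.
  shellHook = ''
    export LD_LIBRARY_PATH=${pkgs.addDriverRunpath.driverLink}/lib''${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  '';
};
```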

Only if you actually want the linker to put `libcuda.so` in `DT_NEEDED` and not complain about the missing dependency. In principle, you don’t even have to link the stubs, because cudart (or whatever you’re using) will `dlopen` `libcuda` itself. That said, you’ll find the stubs in `getOutput "stubs" cudaPackages.cuda_cudart`. On NixOS you’d put `${driverLink}/lib` into `DT_RUNPATH`. In derivations this is done by the `autoAddDriverRunpath` hook, but of course cargo will not magically know to run this hook in a dev shell.
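If you do want the stubs in the link and the driver path baked into the binary’s runpath, one way in a dev shell is to pass both through the linker flags, e.g. via `RUSTFLAGS`. A sketch, untested; the exact stubs subdirectory and flags may differ for your setup:

```nix
pkgs.mkShell {
  packages = with pkgs; [ rustc cargo cudaPackages.cuda_cudart ];
  env = {
    RUSTFLAGS = builtins.concatStringsSep " " [
      # Link against the driver-API stubs at build time
      # (path under the "stubs" output may vary between nixpkgs versions).
      "-L ${pkgs.lib.getOutput "stubs" pkgs.cudaPackages.cuda_cudart}/lib/stubs"
      # Bake the NixOS driver location into DT_RUNPATH for run time.
      "-C link-args=-Wl,-rpath,${pkgs.addDriverRunpath.driverLink}/lib"
    ];
  };
}
```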