Perhaps someone is interested in the solution in more detail. I am running the nix package manager on an Ubuntu 24.04 LTS system that has the CUDA drivers installed (560.35.03), and I want to make these local drivers visible to software installed by nix so that I can build development shells. This is done in three steps.
First, expose-cuda.nix
is a derivation that packages symbolic links to the absolute paths of the relevant shared libraries into the nix store. It depends on the details of the CUDA driver that is already installed:
{ stdenv }:
stdenv.mkDerivation rec {
name = "expose-cuda-${version}";
version = "1.0";
src = ./.;
installPhase = ''
mkdir -p $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libcuda.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libcuda.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libcudadebugger.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.0 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.2 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.0 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-api.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-encode.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-ml.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.4 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-pkcs11.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.1 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.560.35.03 $out/lib
ln -s /usr/lib/x86_64-linux-gnu/libnvoptix.so.1 $out/lib
'';
}
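As an aside, the long run of ln -s lines can also be generated from a plain list of library names, which makes the derivation easier to adapt to another driver version. The following is only a sketch of the same idea; the abbreviated driverLibs list is illustrative and would need to be filled in with the full set from above (callPackage supplies lib automatically, so default.nix does not change):
{ stdenv, lib }:
let
  # Library names provided by the host driver under /usr/lib/x86_64-linux-gnu;
  # adjust this list to whatever your driver installation actually ships.
  driverLibs = [
    "libcuda.so"
    "libcuda.so.1"
    "libnvidia-ml.so"
    "libnvidia-ml.so.1"
    # ...the remaining names from the list above
  ];
in
stdenv.mkDerivation rec {
  name = "expose-cuda-${version}";
  version = "1.0";
  src = ./.;
  installPhase = ''
    mkdir -p $out/lib
    ${lib.concatMapStringsSep "\n"
        (l: "ln -s /usr/lib/x86_64-linux-gnu/${l} $out/lib")
        driverLibs}
  '';
}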
Second, in cuda-python.nix
I wrap Python3 so that these libraries are appended to the end of its LD_LIBRARY_PATH
. Apparently this is where Python3 expects them and where torch can later probe for them:
{ stdenv, pkgs, expose-cuda }:
stdenv.mkDerivation rec {
name = "cuda-python-${version}";
version = "1.0";
src = ./.;
nativeBuildInputs = with pkgs; [
makeWrapper
];
buildInputs = with pkgs; [
python3
expose-cuda
];
buildPhase = ''
'';
installPhase = ''
mkdir -p $out/bin
mkdir -p $out/lib
cp -p ${pkgs.python3}/bin/python $out/bin
'';
postFixup = ''
wrapProgram $out/bin/python --suffix LD_LIBRARY_PATH ':' ${expose-cuda}/lib
'';
}
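A variation I have not used in production, but which should achieve the same effect, is pkgs.symlinkJoin: it links the whole python3 output into place and wraps the python entry point afterwards, so nothing needs to be copied. A sketch only; the derivation name is arbitrary:
{ pkgs, expose-cuda }:
# Link the complete python3 output and wrap the interpreter in place,
# appending the exposed driver libraries to LD_LIBRARY_PATH as above.
pkgs.symlinkJoin {
  name = "cuda-python";
  paths = [ pkgs.python3 ];
  nativeBuildInputs = [ pkgs.makeWrapper ];
  postBuild = ''
    wrapProgram $out/bin/python \
      --suffix LD_LIBRARY_PATH ':' ${expose-cuda}/lib
  '';
}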
Finally, there is the dev shell based on this wrapped Python3. Here is an actual example, comfyui-env.nix
:
{ pkgs, cuda-python }:
pkgs.mkShell rec {
nativeBuildInputs = with pkgs.buildPackages; [
cuda-python
python3Packages.torchsde
python3Packages.torch
python3Packages.torchvision
python3Packages.torchaudio
python3Packages.einops
python3Packages.transformers
python3Packages.tokenizers
python3Packages.sentencepiece
python3Packages.safetensors
python3Packages.aiohttp
python3Packages.pyyaml
python3Packages.pillow
python3Packages.scipy
python3Packages.tqdm
python3Packages.psutil
python3Packages.soundfile
];
shellHook = ''
echo -e "PyTorch/CUDA environment active.\n"
'';
}
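If you want to double-check inside the shell that the wrapped interpreter is the one actually being picked up, the shellHook can print which python is first on PATH. A trimmed sketch of the same shell (the full package list from above applies unchanged):
{ pkgs, cuda-python }:
pkgs.mkShell {
  nativeBuildInputs = [
    cuda-python
    pkgs.python3Packages.torch
  ];
  shellHook = ''
    echo -e "PyTorch/CUDA environment active.\n"
    # The interpreter on PATH should resolve to the wrapped cuda-python copy:
    echo "python resolves to: $(command -v python)"
  '';
}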
A working default.nix
then reads:
let
pkgs = import <nixpkgs> {
config = {
allowUnfree = true;
cudaSupport = true;
};
};
expose-cuda = pkgs.callPackage ./expose-cuda.nix {};
cuda-python = pkgs.callPackage ./cuda-python.nix { inherit expose-cuda; };
comfyui-env = import ./comfyui-env.nix;
in
comfyui-env { pkgs = pkgs; cuda-python = cuda-python; }
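Since comfyui-env.nix uses the same { pkgs, ... } calling convention as the other two files, it could also be loaded with callPackage, which fills in pkgs automatically. A functionally equivalent sketch of the same default.nix:
let
  pkgs = import <nixpkgs> {
    config = {
      allowUnfree = true;
      cudaSupport = true;
    };
  };
  expose-cuda = pkgs.callPackage ./expose-cuda.nix { };
  cuda-python = pkgs.callPackage ./cuda-python.nix { inherit expose-cuda; };
in
pkgs.callPackage ./comfyui-env.nix { inherit cuda-python; }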
Note that with these config flags, PyTorch is provided with CUDA support, and in my case it is built from source. So start the build about 2 hours before you actually need it…
As for the nix stuff, I am rather new to the game, so if you have suggestions as to how I can improve it, please explain.