So this will take a bit of explanation. For any of you who run nixos-rebuild switch with the latest kernel/NVIDIA driver, you will be using CUDA version 12.8 globally. You will mostly be fine if you are only developing in Python, as Claude explains quite well:
This is because libraries like PyTorch and Numba are built to handle CUDA version compatibility more gracefully. They use the CUDA Runtime API in a more abstracted way:
- They don't directly initialize CUDA devices like our raw CUDA C code
- They include version compatibility layers
- They dynamically load CUDA libraries at runtime
However, if you are developing in raw C inside a shell environment, you will hit cryptic CUDA errors, mostly caused by a CUDA version mismatch.
The reason is that the latest CUDA toolkit (cudaPackages) nixpkgs can give you is 12.4.
AND THERE YOU HAVE IT, PEOPLE. If I am forced to do C development on NixOS inside a container like Docker, that would be very silly, people. That would be very silly.
Worse yet, the CUDA wiki page, written by whoever, is largely incompetent and very annoying, with little useful information to say the least.
And when I found out Arch Linux has already updated their repo to include CUDA 12.8, I genuinely felt sad for NixOS.
https://archlinux.org/packages/extra/x86_64/cuda/
Is there anything I have done wrong?
This is my shell code:
with import (fetchTarball "https://github.com/NixOS/nixpkgs/tarball/9aed71348bff9c6e142cc3f64206a128388494b9") {
  config = {
    allowUnfree = true;
  };
};
mkShell {
  buildInputs = [
    # CUDA 12.4 and matching NVIDIA drivers
    cudaPackages.cuda_cudart
    cudaPackages.cuda_nvcc
    cudaPackages.cuda_cccl
    linuxPackages.nvidia_x11

    # Other development tools
    gcc12
    gdb
    cmake
    gnumake
    ninja
    clang-tools
    valgrind
    libGLU libGL
    xorg.libXi xorg.libXmu freeglut
    xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
    pkg-config
    binutils
  ];

  shellHook = ''
    # NVIDIA driver and CUDA setup
    export NVIDIA_VISIBLE_DEVICES=all
    export NVIDIA_DRIVER_CAPABILITIES=compute,utility
    export CUDA_VISIBLE_DEVICES=0

    # Path setup
    export PATH="${pkgs.gcc12}/bin:$PATH"
    export PATH="${pkgs.cudaPackages.cuda_nvcc}/bin:$PATH"

    # CUDA setup
    export CUDAHOSTCXX="${pkgs.gcc12}/bin/g++"
    export CUDA_HOST_COMPILER="${pkgs.gcc12}/bin/gcc"
    export CUDA_HOME=${pkgs.cudaPackages.cuda_cudart}
    export CUDA_PATH=${pkgs.cudaPackages.cuda_cudart}

    # Library paths with the specific NVIDIA driver
    export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib
    export LD_LIBRARY_PATH=${pkgs.cudaPackages.cuda_cudart}/lib64:${pkgs.cudaPackages.cuda_cudart}/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=${stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH
    export LIBRARY_PATH=${pkgs.cudaPackages.cuda_cudart}/lib64:${pkgs.cudaPackages.cuda_cudart}/lib:$LIBRARY_PATH

    # OpenGL driver path
    export LD_LIBRARY_PATH=${pkgs.linuxPackages.nvidia_x11}/lib:$LD_LIBRARY_PATH

    echo "CUDA C/C++ development environment ready"
    echo "GCC version in use:"
    gcc --version
    echo "NVCC version:"
    nvcc --version
    echo "SHELL NVIDIA driver version:"
    cat ${pkgs.linuxPackages.nvidia_x11}/lib/nvidia/version
  '';
}
And this is the C code for testing purposes:
#include <stdio.h>
#include <cuda_runtime.h>  // nvcc adds this implicitly for .cu files, but explicit is clearer

int main() {
    int deviceCount;
    cudaError_t error = cudaGetDeviceCount(&deviceCount);
    if (error != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(error));
        return -1;
    }
    printf("Number of CUDA devices: %d\n", deviceCount);
    for (int i = 0; i < deviceCount; i++) {
        cudaDeviceProp prop;
        error = cudaGetDeviceProperties(&prop, i);
        if (error != cudaSuccess) {
            printf("Failed to get properties for device %d: %s\n",
                   i, cudaGetErrorString(error));
            continue;
        }
        printf("Device %d: %s\n", i, prop.name);
        printf("  Compute Capability: %d.%d\n",
               prop.major, prop.minor);
        printf("  Total Global Memory: %zu MB\n",
               prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
// compile with: nvcc -I${CUDA_PATH}/include -L${CUDA_PATH}/lib64 -L${CUDA_PATH}/lib hello.cu -o hello
I really wish I had made some silly mistake and am very wrong about NixOS. If anyone can point me in the right direction and free me from this frustration, that would be great.
I want to hear your opinions on this, thank you.