```
$ .venv/bin/pip install numpy torch
$ .venv/bin/python -c "import torch; print(torch.cuda.is_available())"
False
```
PyTorch from PyPI doesn’t know where to find the cuda driver; you need to tell it explicitly, e.g. like so: `LD_LIBRARY_PATH=/run/opengl-driver/lib python -c "import torch; ..."`.
This is because torch from PyPI is built specifically for FHS operating systems, i.e. systems that drop their libraries in the global `/usr/lib` and expose a global `/etc/ld.so.conf` file which tells them extra library search paths, including the location of the cuda driver (`libcuda.so`). If you want a better experience using prebuilt programs built with this assumption, you might also want to look into `programs.nix-ld.enable`.
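For reference, nix-ld is a one-line NixOS module option (sketch; the optional `libraries` list shown in the comment is just an illustration of exposing extra libraries):

```nix
# configuration.nix fragment: provide a dynamic loader shim so unpatched
# FHS-assuming binaries can run on NixOS.
{ pkgs, ... }:
{
  programs.nix-ld.enable = true;
  # Optionally expose extra libraries to programs run under nix-ld:
  # programs.nix-ld.libraries = with pkgs; [ stdenv.cc.cc.lib ];
}
```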
Interestingly, the global one (the one that does work) doesn’t have any cuda things listed in its `Requires`.
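You can inspect what a wheel declares at the dist-info level with the standard library’s `importlib.metadata` (a sketch; on a machine with the PyPI torch installed, `declared_requires("torch")` would show the `nvidia-*` cuda wheels recent releases depend on — here the mechanism is just demonstrated with whatever distribution name you pass in):

```python
# Read the Requires-Dist entries recorded in an installed distribution's
# dist-info METADATA -- the declarations discussed above.
from importlib import metadata

def declared_requires(dist_name: str) -> list[str]:
    """Requires-Dist entries for an installed distribution ([] if none/absent)."""
    try:
        return metadata.requires(dist_name) or []
    except metadata.PackageNotFoundError:
        return []

if __name__ == "__main__":
    print(declared_requires("pip"))
```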
This is because pytorch on PyPI only recently started declaring its cuda libraries as Python dependencies at the dist-info level; they used to just copy the cuda native libraries into their own wheels instead. For our source build this is irrelevant, because we don’t even want to be using the python-packaged cuda: we want to link against our own `cudaPackages`, which has a bunch of patchelf hardening applied and is (or at least should be in the long run, we’re still not doing too great) split into independent outputs to reduce runtime closure sizes.
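A minimal sketch of that source-build route (assumes nixpkgs with the `cudaSupport` config flag; expect a long compile or use a binary cache):

```nix
# shell.nix: a Python environment whose torch is built against nixpkgs'
# own cudaPackages (patchelf-hardened, split outputs) rather than PyPI wheels.
{ pkgs ? import <nixpkgs> {
    config.allowUnfree = true;   # the CUDA toolkit is unfree
    config.cudaSupport = true;   # build torch and friends against cudaPackages
  }
}:
pkgs.mkShell {
  packages = [ (pkgs.python3.withPackages (ps: [ ps.torch ps.numpy ])) ];
}
```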